Monday, October 24, 2016

On the winding of power chokes and transformers: Part 1 - Chokes

There are projects that homebrewers undertake that seem not to make sense when one considers the amount of time they take - but then again, this is often the case when one builds projects at home, from "scratch".  The case in point for this discussion is the winding of power transformers and chokes.

Figure 1:
The finished choke with a rating of about 50 Henries at
200 milliamps, described below.
Click on the image for a larger version.
I am not an expert in this so what follows is, among other things, the documentation of the learning experience:  No doubt I will make mistakes along the way, but analysis of these mistakes and observations will (hopefully) be enlightening.


For some reason I have decided to go "old school":  Along with a friend (Bryan, W7CBM), we will each construct a two-channel single-ended triode audio amplifier using some old WWII-vintage tubes (the details of this amplifier to be covered in a later post.)  To be clear, I'm not of the sort that really believes that the "tube sound" is best for various intangible reasons - in fact, I expect that even if it works better than anticipated, its performance will be worse than practically any other modern audio amplifier that I have in the ways that matter to most people (e.g. power efficiency, noise, hum, distortion, frequency flatness, phase - you name it!)

As is often the case with a project like this, practicality is somewhat irrelevant:  It is the experience of doing something that you have never done before - and learning from it - that will be the reward whether or not the project is ultimately successful.  To be sure, there will not likely be a cost savings in doing this and there certainly will not be a time savings!

Diving into this project, we decided to do something that I'd not done for many years:  Wind our own power transformers and chokes.  As for the audio output transformers for this audio project, we decided to forgo that task and buy "store bought", proven transformers, given our current level of experience.

Where to find useful information:

Interestingly, scant useful and credible information seems to be available on the GoogleWeb on this topic, but two sources that are useful are:
  1. Turner Audio (link) - These pages contain much practical advice on power and audio transformers and chokes.  (Refer to the link "Power Transformers and Chokes" (link) and related pages linked from that page.)
  2. Homo-Ludens - Practical transformer winding (link) - While mostly about power transformers, this page also contains practical advice based on hands-on experience of winding, re-winding and reverse-engineering/rebuilding transformers.  There is also another linked page "Transformers and Coils" (link) that has additional information on this topic.
Figure 2:
The NZ-1 "Hand Shake" (e.g. "hand-cranked") winding
machine, before it was mounted to a firm base.
Click on the image for a larger version.

A few other bits and pieces of information have been found but the above two seem to be some of the best, both written by people who have years of practical experience - and the pages include additional references to other sources.

Rummaging through some old amateur radio and electronics books I have been able to divine other details as well, but much of this information is rather generic and doesn't speak much about the use and capabilities of modern types of wire, insulation or core materials.

Getting the gear together:

Figure 3:
 A simple wire spool holder made from pieces of "1x2"
(actually 0.75" x 1.5")  poplar.  While it works, the variable
feed rate of the wire due to winding it onto a square bobbin
can cause the momentum of the supply spool's rotation to
occasionally "overspool" some wire.  I hope to (soon) 
add additional wire features (e.g. guides, wire-unspool
prevention, etc.) to prevent the wire from getting
tangled as it is unreeled during winding.
Click on the image for a larger version.
Before laying a large number of turns on a transformer bobbin, one must have a minimum of gear on-hand.  Starting out with nothing, I decided to buy an inexpensive ($40, delivered) Chinese-made "NZ-1 Hand Shake" (probably a mistranslation of "Hand Crank") coil winding machine (seen in Figure 2) via EvilBay.

Made mostly from cast steel, this device is about as simple as it can get:  It has some gears that provide an 8:1 ratio for the hand crank (e.g. one revolution of the crank = 8 turns on the coil), a 5-digit turns counter (0-99999) and a spindle on which one would mount the bobbin, and it seems to be built "well enough" for the purpose, likely to last many hundreds of thousands of revolutions.  This device has no wire-handling capability (e.g. nothing to hold the spool, guide the wire or provide tension) so those pieces would have to be constructed and/or improvised.

Figure 4:
 A simple bobbin holder made from scraps of "1x2"
poplar wood and paneling.  The left and middle pieces
are pinned to each other using finishing nails and
are clamped together when mounted on the winder.
"Face identification" marks allow one to keep track
and note the configuration of the wires and the visible
screw on the middle block - and its mate on the back-
side - allow wires attached to the various windings
to be kept out of the way.  To the right, in the background,
are two blocks that are used to help compress the
windings onto the bobbin as they bulge out when
many layers are added.
Click on the image for a larger version.

After the NZ-1 arrived I bolted it to a block of wood and for wire handling of the supply spool I made a simple wooden "A" frame holder (Figure 3), placing it on the workbench behind the winding machine. 

By default, the crank is connected to a shaft that, via gears, results in eight turns on the coil with one turn of the crank.  For practical reasons, winding a core like this needs a bit more precision and control, so a 1:1 ratio is desired - and the obvious place to relocate the crank is the upper shaft with the pulley, visible in Figure 2 on the upper-right corner of the winding machine.

While the shaft size is the same, attaching the crank at this point caused the crank's handle to hit the lower shaft to which the crank is usually connected!  Fortunately I could attach the crank to the very end of the shaft and have it just barely clear the lower shaft where the crank would normally be attached for the 8:1 ratio:  At some point I'll have a metal piece made that will extend the shaft by a half inch (about 1.3cm) or so.

Also very important is the bobbin holder (Figure 4) constructed from scraps of paneling and "1x2" pieces of poplar.  Because the core size is "E150" - which stands for "1.5", two pieces 1x2 wood (which are really 0.75"x1.5") fit very nicely within the bobbin.  Glued to the end of the bobbin holder on one side is a scrap of paneling as an end-stop while on the other side, oriented by two finishing nails used as pins, are two more pieces of 1x2 glued together as the "other" end stop.  Both the bobbin and the removable end stop are marked to provide a means of orienting the four faces of the bobbin and the removable piece also sports two screws around which wires that are brought out from the bobbin may be wound to keep them out of the way.  Two more blocks of wood, cut to the internal length of the bobbin, are also available to compress the "bulge" that inevitably forms as wire is wound.  Through the center of the bobbin holder was drilled a hole of the size appropriate to allow it to be slipped over the shaft of the winding machine.  When mounted, the bobbin holder is held in by compression of the nut and it does not easily turn on its own.

While not the most elegant of solutions, this arrangement seems to work - provided that one takes care to prevent "overfeeding" of the wire and subsequent possible tangling of off-spooled wire caused by the inconsistent feed rate as one winds onto a square bobbin and the supply wire occasionally tries to wind itself around the piece of "allthread" on which the spool hangs!

The most important aspects of this minimal arrangement are:
  1. A means of automatically counting the turns laid down.  While inductors are somewhat less critical in terms of keeping track of turns with absolute accuracy, when winding transformers you do not want to lose count at all!
  2. The bobbin itself being mounted on a stable, rotating platform about its winding axis.
  3. Being able to manually gauge the tension and guide the wire into the bobbin.
While it would have been pretty easy to make a simple 1:1 gear ratio winding machine from parts laying about, the $40 price was hard to beat!

The goal:

Because a choke is "simpler" to wind than a transformer, I decided to start with that.

The design goal of the choke was to provide at least 10 Henries of inductance with a current capacity of at least 200mA.  The general rule of thumb for a modest-sized transformer or choke is to size the wire to 0.3 to 0.4 mm2 per amp (from references 1 and 2, above) which would imply that for 200mA the minimum wire would need a cross-sectional area of around 0.06mm2 which corresponds to #29 AWG.
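That sizing arithmetic can be sketched in a few lines (the AWG diameter formula is the standard geometric one; 0.3mm2 per amp is the lower end of the rule of thumb above):

```python
import math

MM2_PER_AMP = 0.3    # rule-of-thumb copper area per amp (0.3-0.4 mm^2/A)

def awg_diameter_mm(awg):
    """Bare-copper diameter of an AWG gauge (standard AWG formula)."""
    return 0.127 * 92 ** ((36 - awg) / 39.0)

def awg_area_mm2(awg):
    return math.pi * (awg_diameter_mm(awg) / 2) ** 2

needed = 0.2 * MM2_PER_AMP                # 200 mA -> 0.06 mm^2 minimum
print(round(needed, 3))                   # -> 0.06
print(round(awg_area_mm2(29), 3))         # -> 0.064 (#29 AWG just clears it)
```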

Now we need to figure out (approximately) how many turns can fit on our bobbin.

The E150 bobbin that I would be using has an inside dimension of 41.5mm on a side, an outside dimension of 75.4mm on a side and a "width" of 53.3mm.  Based on this, we calculate that the bobbin's window is approximately 53mm wide and 17mm high, or about 901mm2.

Because the cross-sectional area of #29 AWG wire, based on a 0.33mm diameter with insulation, is 0.086mm2, this implies that we could theoretically put around 10477 pieces of wire (or turns) in this 901mm2 space.  Practically speaking, this is simply not possible since there is always going to be some "wastage" based on the fact that we will be laying round wires next to each other and there will be gaps in between.  Typically, one would scale this calculation by a "fill factor" to accommodate such wastage with this factor typically being around 0.4 (e.g. 4191 turns) or as high as 0.5 (5239 turns) if one winds extremely carefully and neatly - and assuming no added insulation:  Less careful winding can easily result in "fill factors" lower than 0.4.

Breaking this down differently we can calculate that across the width of the bobbin we should (theoretically) be able to put (53mm/0.33mm = 161) turns on each layer, but taking into account the practicalities of being able to get wire to lie snugly next to its neighbor we can more reasonably expect to achieve around 95% of this, or about 153 turns per layer - maybe slightly more if we are fastidious.

Since our bobbin window is 17mm "tall" we could theoretically expect to be able to stack 51 layers of 0.33 thick wire, but this also assumes both "perfect" stacking efficiency and the lack of any inter-layer insulation.  If we presume that each layer contains a 0.33mm thickness of wire and 0.05mm thickness of insulation - a total of 0.38mm - we can then recalculate that it would take about (17/0.38) 44 layers of windings to fill the bobbin - again, assuming perfect stacking and no need for additional insulation.

For practical reasons we would want to fill our bobbin to only 70-90% "fullness" in order to be able to add the final covering insulation and connecting wires, so this would take our maximum layer count down to between 30 and 39 layers - again, assuming that we didn't need extra insulation anywhere.

If we take an average of 153 turns per layer, we could reasonably expect to be able to put between 4590 and 5957 turns on this bobbin.  Taking a median per-turn length of wire on the bobbin to be around 234 mm based on the size of the bobbin itself, we can estimate that we'll need between 1074 and 1394 meters, or between 28 and 37% of our 5 pound spool.
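The whole bobbin-capacity estimate above can be sketched as a few lines of arithmetic (the rounding differs from the text by a turn or two here and there):

```python
wire_dia = 0.33            # mm, #29 AWG including insulation
win_w, win_h = 53.0, 17.0  # usable bobbin window, mm
layer_ins = 0.05           # mm of polyimide tape per layer

turns_per_layer = int((win_w / wire_dia) * 0.95)  # ~95% packing across a layer
layers_full = int(win_h / (wire_dia + layer_ins)) # with inter-layer insulation
layers_lo = int(layers_full * 0.70)               # 70-90% "fullness"
layers_hi = int(layers_full * 0.90)

print(turns_per_layer, layers_full)               # -> 152 44
print(turns_per_layer * layers_lo,
      turns_per_layer * layers_hi)                # -> 4560 5928
```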

At this point I should have referred more closely to link #1, above, to determine the amount of inductance that the calculated number of turns would yield in a typical situation.  To do this, I would have wound a known number of turns onto the bobbin, assembled the coil into a core and made inductance measurements so that the permeability - particularly with an air gap added (more on that later) - could be measured.  Instead, I simply bulldozed ahead and started winding turns!  Had I done those measurements first, I probably would have wound a coil with fewer turns and, possibly, heavier-gauge wire.

Starting the wind:

With the necessary parts (bobbins and core material being "E150" types from Edcor, the wire and polyimide insulating tape from other sources) I began to wind a choke - the simplest of the devices and (hopefully) hardest to screw up!

Figure 5:
The start of the wind with the ending wire (orange) having been insulated and
secured with polyimide (a.k.a. "Kapton" (tm)) tape.  Note how the end wire
is secured to screws on the wooden bobbin holder to keep it out of the way.
Click on the image for a larger version.
The first step in the wind was to drill a hole in the bobbin through which one of the external connecting wires would be attached.  Carefully scraping - to avoid nicking and weakening the copper - tinning, twisting and then soldering the start of the coil's wire (29 AWG) to the heavier lead that emerged from the bobbin, I insulated it with several layers of polyimide tape, secured it into place on the bobbin, laid down another insulating layer of tape on the bobbin itself and began to wind.

The general advice on winding transformers seems to be that it is best to assure that all wires are laid side-by-side on nice, even layers.  By doing this one avoids one turn from crossing another and putting tremendous point-pressure on the insulation of the two wires as they cross.  This advice hails from the days when wire was insulated with simple, comparatively fragile varnish that could easily be penetrated by vibration due to hum, by thermally-related movement and by its softening at higher temperatures.  These days, however, wire is available with much more durable insulation, so it seems to be permissible - with chokes, anyway - to allow wires to cross at shallow angles.

Since this was my first large power choke I decided to take the "neat, even layer" approach for the most part, but what this meant was that during each and every turn I would have to watch exactly where the wire lay, often scrunching it close to its neighbor if it shifted away and "un-crossing" it if it happened to overlay another.  This also meant that after I finished winding every layer of approximately 150-156 turns I had to stop, temporarily tape the source wire to the bobbin to keep it from unraveling, apply a layer of 2-mil (0.05mm) polyimide tape over the top of the layer and then resume winding.

As is the case with such things, it is a bit more difficult than one might at first expect.  For the first several layers I had to contend with the "lump" and the "gap" at the "start" end of the bobbin where the external wire was attached and I did this by filling in the void with very thin strips of cardboard laid down to match the width and height of the heavier, connecting wire. After several layers, the layers equaled the height of this "lump" and with the matching thickness of the cardboard strips, it disappeared, allowing me to more easily wind across the entire width.

Figure 6: 
Pausing at the end of winding a layer, securing the end with a piece of
tape and small, plastic clamp.  Already one can see ripples and uneven-ness
starting to appear in the surface of the winding - something that I had to
continually correct as I continued on!
Click on the image for a larger version.

After a dozen or so layers the windings started to form "ripples", caused largely by the fact that the polyimide tape overlapped in approximately the same place each time - a problem exacerbated by the fact that I had, at the time, only one width of tape, limiting the degree to which I could "stagger" the overlap.  The biggest problem with this was not aesthetics, but the fact that, as the wire was wound, it tended to slip into the trough of the ripple, making it very difficult to maintain evenness and the nice, side-by-side wire lay for which I was striving - and with each layer, these irregularities grew!

What I had to do was to fold over some of the tape in half (sticky side facing outwards) to fill in some of the dips and on the square "corners" of the bobbin I started to strategically add small bits of tape to level it out as it was there that the slippage was most likely to occur.  As I got into 10s of layers, some of the troughs seemed to resist the attempts at leveling so I carefully resorted to putting several turns of wire atop each other at shallow angles to help even the surface:  Only a tiny fraction of the turns are wound in this way.

The lesson here is to have on hand multiple widths of tape so that the overlaps may be staggered.  It would have also been handy to have on-hand some pieces of MNM ("Nomex" (tm)) insulating sheets to cut into strips to fill the voids, as well as some tape exactly as wide as the bobbin - but such a specific width is not likely to be commercially available.

The results:

Figure 7: 
The winding of the bobbin is finally complete!
At this point the windings are covered with several layers of polyimide
tape with the soldered and insulated end of the coil captured
underneath.  The uneven-ness of the coil's surface can be seen, but
this degree was just manageable.
Click on the image for a larger version.
Finally, after 4905 turns and 33 layers, the bobbin had filled up:  This number of turns and layers ended up being reasonably close to the range that I'd previously calculated, above!

Based on the DC resistance - around 277 ohms - this represents approximately 3384 feet (1031 meters) of wire - a bit more than a quarter of the 5 pound spool.  Carefully drilling another hole in the bobbin I attached the "finish" wire to the outside of the coil, insulated it and taped it securely into place.
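Working backward from resistance to length is a straightforward bit of arithmetic - a sketch, assuming standard annealed-copper resistivity:

```python
import math

RHO_CU = 1.724e-8                     # ohm*m, annealed copper at 20 C

def awg_ohms_per_m(awg):
    d_mm = 0.127 * 92 ** ((36 - awg) / 39.0)   # standard AWG diameter formula
    area_m2 = math.pi * (d_mm / 2000.0) ** 2   # mm diameter -> m radius
    return RHO_CU / area_m2

length_m = 277.0 / awg_ohms_per_m(29)          # from the measured DC resistance
print(round(length_m), round(length_m * 3.281))  # roughly 1030 m / 3385 ft
```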

By itself (no core) the inductance of the 4905 turns on the bobbin measured out at approximately 884 millihenries and when the core was installed - butt-stacked, but not carefully aligned - this increased to over 110 Henries:  If I had carefully interleaved all 109 E-I laminations and properly seated each piece this value would have likely been much higher!

Preventing core saturation:

Because this is intended to be used as part of a "choke input" high voltage power supply one must take into account the fact that DC current will flow through the choke, magnetizing the core material.  One property of practically any ferromagnetic material - including the steel used in the core of this choke - is that if the magnetic field exceeds a certain point, the core will saturate and the permeability (and the inductance) will drop!  While there are chokes that are designed to do this (e.g. "swinging" chokes - those that have inductance that, by design, drops with increased current) it was desired that this choke's inductance be more consistent.

The most common way to control this effect is to intentionally reduce the permeability of the core overall by the introduction of an air gap which breaks up the magnetic lines of force:  The larger the gap, the greater the reduction.  The trick here is to reduce the permeability - and thus the level of magnetization - to the level that the core does not saturate at the desired current, but not too much more.

Fortunately, there are some equations that help us calculate things like this.  Knowing that the core material that I am using ("M6" Grain-Oriented Silicon Steel) will safely operate at 1.6-1.7 Tesla (16000-17000 Gauss) I could crunch some numbers based on what I already knew.

Taking the following equation from link #1 (the Turner audio web pages), above:

uE = (10^9 * ML * L) / (T^2 * Afe)

  • uE = The permeability of the core
  • ML = Iron path length in mm (approx. the width plus length of a single assembled E-I section)
  • L = The measured inductance in Henries
  • T = Number of turns
  • Afe = The cross-sectional area of the iron inside the bobbin in mm2
Since I was using E150 cores with square bobbins, Afe ≈ 1820mm2 and ML = 209mm

With the temporary butt-stacked arrangement (e.g. full of small gaps due to pieces not aligning perfectly) yielding approximately 110 Henries the calculated permeability was 525:  If I'd very carefully interleaved and stacked each lamination, I would have expected the permeability to be in the multi-thousand range!

Taking this into another equation, we can calculate the magnetic field strength using this equation, also from link #1:

Bdc = (12.6 * uE * T * Idc) / (ML * 10000)

  • Bdc = Magnetic field, in Tesla.
  • uE = The permeability of the core.
  • T = Number of turns.
  • Idc = DC current, in amps - the target being 0.2 amps for this design.
  • ML = Iron path length in mm, as above
Crunching the numbers, above, we find that Bdc is approximately 3.1 Tesla - a bit too high for our M6 material, so we need to add more gap than is present due to the haphazard stacking of the material.  All things being equal, since the magnetic field strength is proportional to permeability and since inductance is proportional to permeability, we now know that if we can add enough of a gap to reduce the inductance by approximately half we will be in the ballpark.
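Plugging the measured values into the Bdc equation above, as a sketch:

```python
def bdc_tesla(mu_e, turns, idc, ml_mm):
    """Magnetic field from the equation above."""
    return (12.6 * mu_e * turns * idc) / (ml_mm * 10000.0)

b = bdc_tesla(525, 4905, 0.2, 209)
print(round(b, 1))                    # -> 3.1 (well past M6's 1.6-1.7 T limit)

# B scales directly with permeability - and hence with inductance - so the
# inductance allowed at ~1.7 T is roughly:
l_target = 110 * (1.7 / b)
print(round(l_target))                # -> 60 (about half of the 110 H)
```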

Figure 8:
The arrangement of the E-I laminations and the insertion of insulating
strips used to add an air gap to reduce the overall core permeability
and prevent core saturation.
In the middle and on the ends are single "E" sections that are interleaved
to help hold the other sections in place via compression when the screws
are installed and tightened.
Click on the image for a larger version.
Placing some 10 mil (.25mm) thick "fish" paper between the "E" and "I" pieces I observed that the inductance was now around 51 Henries which meant that the magnetic field was approximately 1.44 Tesla - within reasonable range of the M6 core material that I'm using.

After the original assembly of the core, I had to disassemble the choke's laminations again to mount the end bells as shown in Figure 1, above.  In the process I used a wooden block to more evenly seat the laminations:  Between this and cinching the screws tightly, the gap between the E and I sections was slightly reduced and the measured inductance went up to around 75 Henries - a bit too high to prevent saturation at 200 mA according to our previous calculations.  Adding a second piece of 10 mil insulating paper - for a total of 20 mils (0.5mm) of thickness (or 40 mils overall in the magnetic path) reduced the inductance to around 54 Henries as can be seen below in Figure 9.

Final thoughts:

For the time being - and since this is a prototype - I'll leave the design alone, but it brings up some points for the next time that I wind a similar choke:
  • Now that I have some actual, practical numbers for this particular core material and the effects of gapping, I should be able to predict ahead of time with reasonable accuracy the outcome of a particular winding configuration.  Using a different core (size, material) I would be certain to do a "test winding" to divine its aspects prior to finalization of the design.
  • 51 Henries is a bit higher than is typically used for a power supply choke, providing an inductive reactance of around 38k Ohms at 120 Hz, the AC ripple frequency for full-wave rectified 60 Hz mains.  (It would be about 32k for 100 Hz on 50 Hz mains.)  In general, the higher the inductance, the better - provided care is taken to avoid resonances related to the filter capacitance and the load.  The higher inductance will reduce the ripple - particularly important for single-ended triode amplifiers.
  • The design goal of the choke was to handle at least 200mA.  With a DC resistance of 277 ohms approximately 55 volts will be lost across the choke due to this resistance at the design current, implying a heat load of approximately 11 watts (resistive only and not including core losses) - an acceptable amount for a core this size.  The voltage drop - while higher than desired - can be simply accepted and/or taken into account when designing and building the high-voltage plate transformer and associated supply.
  • A more conventional inductance for a choke-input power supply is in the area of 3-10 Henries, so this design is likely an overkill - but then again, reducing the ripple on the power supply of a single-ended triode amplifier is not a bad thing! 
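The reactance and dissipation figures in the notes above can be checked with a few lines:

```python
import math

L_H, R_DC, I_DC = 51.0, 277.0, 0.2    # the as-built choke at the 200 mA design current

xl_120 = 2 * math.pi * 120 * L_H      # ripple reactance, full-wave rectified 60 Hz mains
xl_100 = 2 * math.pi * 100 * L_H      # ditto for 50 Hz mains
v_drop = I_DC * R_DC                  # DC voltage lost across the winding
p_diss = I_DC ** 2 * R_DC             # resistive heating, core losses excluded

print(round(xl_120 / 1000), round(xl_100 / 1000))  # -> 38 32 (kOhms)
print(round(v_drop), round(p_diss, 1))             # -> 55 11.1
```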
Were I to have more carefully measured the "typical" permeability with a fewer number of turns I might have cut the number of turns in half and in so doing I would have (theoretically) ended up with about one quarter of the inductance - around 12-13 Henries - and half of the resistive loss.  However, based on the equation just above, if I did that I could also decrease the size of the gap to increase the overall permeability and, again, boost the inductance without fear of saturating the core - which could permit a further reduction in the number of turns and winding loss.

Figure 9:
The completed choke (upper right) being measured on my old General Radio GR-1650A impedance bridge.
The measured inductance is approximately 54 Henries with a Q of approximately 4.1 at a frequency
of 1 kHz.  By the time I was able to take the picture, the inductance had drifted very slightly due to
change in temperature, moving the meter's needle away from a good null.
Click on the image for a larger version.

Doing this would allow the use of a slightly heavier gauge of wire to be used on the same-sized bobbin to further decrease ohmic losses and increase current-handling capacity.  In short, I suspect that something in the range of 1600-2000 turns of 27 AWG and a reduced air gap would yield something fairly close to a 10 Henry inductor with significantly reduced DC resistance (e.g. in the range of 55-75 ohms) with the current rating being restricted by the desired minimum inductance rather than the heat dissipated due to Ohmic losses.
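As a rough sanity check on that proposal (the 1800-turn count and 210mm mean turn length are my assumptions, picked from the middle of the ranges mentioned):

```python
import math

def awg_ohms_per_m(awg, rho=1.724e-8):
    """DC resistance per meter, from the standard AWG diameter formula."""
    d_mm = 0.127 * 92 ** ((36 - awg) / 39.0)
    return rho / (math.pi * (d_mm / 2000.0) ** 2)

turns = 1800            # assumed: mid-range of the 1600-2000 above
mean_turn_m = 0.21      # assumed: a thinner winding than the 234 mm average above
r_dc = turns * mean_turn_m * awg_ohms_per_m(27)
print(round(r_dc))      # lands in the 55-75 ohm range quoted above
```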

At some point in the near-ish future, another such inductor will be wound, but we have yet to determine the form it will take:  Will it be as neatly wound as this one, or will it be wound so that the wires are allowed to cross at shallow angles?  Will it have about as many turns as this choke, or will it use a heavier conductor with fewer turns for less - but adequate - inductance?

Stay tuned!

The next phase of this project will be to wind a filament transformer. 

Wednesday, September 28, 2016

Automatic volume tracking for a TV

A couple of years ago I was given a non-working 50" Philips flat-panel TV.  As is often the case with these things, there was actually very little wrong with it and in this instance it was a pair of bad capacitors in the power supply and it cost about $3 to fix.

With the working TV, I now had another problem:  The internal speakers sounded terrible!

Rummaging around I found an old JVC 35 watt/channel audio amplifier/tuner from the 1980's and using the "audio output" on the back of the TV I connected it to the amplifier and that to a pair of high-efficiency JBL 12" 4-way speakers:  It sounded pretty good - lots of volume, good highs and thundering bass with "only" 35 watts (I wouldn't need a subwoofer!) - but the volume control had no effect.  Finding another cable, I then tried the headphone jack on the side of the TV, but its volume wasn't affected!

WTH, Philips?!?

Figure 1:
A view of the volume tracker showing the connections,
indicator and controls.
I've seen this same issue on at least one other TV, but in some of them one can find an option - often deep in a configuration menu - that allows one to make the audio line output track the volume control - but not this one.  What this meant was that if I was using an external audio amplifier and wanted to adjust the TV volume with a remote, I would need either two remotes or a universal remote programmed to do the task.  This latter point isn't too much of a problem as there are many universal remotes that can be configured to split tasks amongst different boxes, but the audio amplifier that I was planning to use (and the only one that I had that would fit in the TV stand) - the 1984 JVC tuner/amplifier - did not have a remote.

So, I did what any nerd type would do:  I threw a computer at it!

It occurred to me that I did have a reference on which I could base an outboard volume control:  The internal speakers of the TV.  I surmised that I could "listen" to the audio level coming out of the speakers, compare it to the fixed-level audio line output from the TV and, based on the difference, adjust the volume of an outboard amplifier.  I figured that this could work if I placed a sense microphone very close to the speaker:  The sound level at the microphone would then be very high compared to the room volume of the external loudspeakers.  This way, not only could the TV's internal speakers stay quite low even at a fairly high volume from the outboard speakers, but sound from the outboard speakers would also be kept from being picked up by the microphone and driving the volume ever higher in a feedback loop.

 Instead of using a microphone I could have tapped into the audio from the internal speakers, but moving this TV and taking it apart is quite difficult with just one person - and I wanted to try out the "microphone" approach, first.

To test this theory I hacked together a bit of code that did nothing but measure the audio level from two sources - a microphone and the audio line output - and then dump those levels, in dB, to the serial port where I could see what was going on.  It looked pretty promising, as the two audio sources seemed to track fairly well.

I could now get down to the task of writing some software and building a dedicated board.

Using 5 MIPS of computing power to adjust a volume control

I chose a PIC16F88 for this task.  This processor has a whopping 368 bytes of RAM and 8 kwords for program space.  It also has an onboard 10 bit A/D converter and UART so that it can both accumulate analog data (audio, in this case) and send out statistics to a serial port so that the results could be analyzed.

Several years ago I'd written some code for the PIC (in C) that took audio fed into the A/D port, calculated the average volume and spit it out in dB below "full scale" of the A/D converter - all based on integer math - and in comparing it to a genuine, mechanical VU meter I found that it matched quite closely.  In testing, this code seemed to be perfectly capable of providing accurate readings (to within better than 1dB) to about 100 kHz - a frequency well above the actual sample rate.  With 10 bits of A/D conversion, I had a usable dynamic range of a bit over 60dB due to "oversampling".

Starting with that code I rewrote it so that it would be able to take two separate channels of audio and produce the sound level, in dB, at an update rate of about 10 readings per second.

In a nutshell, the code works like this:
  • In an interrupt, an A/D reading is taken from audio channel A and converted to a signed integer.
  • At the next interrupt, an A/D reading is taken from audio channel B and converted to a signed integer.
    • The absolute value of that A/D reading is taken and summed with past readings from this same channel.
    • An A/D reading is taken from audio channel B and processed separately in the same way as channel A.  This "interleaving" of readings helps assure that both channels are treated the same way and get similar results.
    • Any instances of hitting "at or near maximum" on the A/D converter set a flag to indicate that we have "clipped" our audio and that the results may be suspect.
  • Once the sum of 500 readings has been taken the code signals that new results are ready.  Since we have only one A/D converter, we can't digitize both channels simultaneously so spreading the readings over time helps assure that the readings would be very similar if the same audio were present in both channels.
  • The main code copies the results of the accumulation of data, clears the summing registers and restarts the interrupt so that new data can be gathered.
  • The sums of the absolute values of each audio channel are then converted into two separate dB readings using a set of lookup tables and interpolation, the result being accurate to within 1 dB.  The numerical reading in decibels is important since the ratiometric difference between the two channels remains constant regardless of the absolute amplitude.
As noted above, the audio is being "undersampled" since the frequency of the audio content can exceed half the actual sampling rate - and because the single A/D converter is multiplexed, the two audio sources (the microphone and the program audio) are not sampled at exactly the same instant.

For our purposes this is irrelevant, as audio is generally redundant in nature - and we are looking at absolute voltage levels rather than anything having to do with spectral content.  What this means is that if you feed the same content into both inputs at the same time, you get essentially identical results from the two VU meter readings, with the amplitude staying flat to within 1 dB to about 100 kHz - a frequency limited (more or less) by circuit capacitance.
Having converted the audio levels to dB readings actually makes the task of comparing them much simpler, as our two audio sources - a sample of the line out from the TV and the audio from the TV's speaker - should follow each other, differing only in the loudness difference between the two.  In other words, all I really needed to do was to adjust the audio line output level so that it tracked the level difference between the two sources:  If the audio level at the speaker increased by 6 dB, I would see that in the output from the microphone and know to increase the line output to the audio amplifier by 6 dB as well.

Figure 2:
The "guts" of the volume controller.  In the upper left one can see U1, the input and output buffers and to its
right U2, the electronic potentiometers.  In the lower left is U3, the microphone and comparison
filter/amplifiers and in the upper-right, U4, the PIC16F88 processor.  Just below U4 are the
power supply components.
Click on the image for a larger version.

Sounds simple, right?

Actually, it's fairly tricky when you get right down to it.

While 60 dB sounds like a lot of audio range - and it is, in fact, a million-to-one difference in power - it turns out that we can easily experience this much range in normal TV programming between a loud explosion and the very quiet parts of the audio, particularly when you take into account the fact that the volume control alone could easily use at least 30 dB of that range!

Since this PIC has only a 10 bit A/D converter, the range was theoretically limited to about 60 dB so I configured each audio channel to have a switchable "high/low" gain setting so that an extra 20 dB or so could be measured.  Since that amount of gain difference was always the same when I switched settings, I could simply add that gain into the dB reading to compensate for the difference.  The way this worked was that if I saw an audio level that was too low, I would switch in the extra gain but if it was too high, I would switch it out.  The result was that I now had about 80 dB of usable measurement range - not bad for a cheap computer chip with only 10 bits of A/D and a couple of op amps!

Knowing when not to act:

There's yet another problem to consider.  If we have audio from both the line input and the speaker, we can easily make a comparison and determine which is louder/quieter, but what if the audio from one source - or both sources - is too low to make a valid comparison?

An obvious example of this would be during a brief pause at the time of a commercial break - or, perhaps, when you are loading a video disc or waiting for a program stream to start.  If you didn't detect the "silence" and prevent the unit from adjusting the volume control, it would probably go out into the weeds whenever it was "too quiet."  The work-around is, of course, to have the computer detect quiet parts and not make adjustments at those times.

Another time during which we should not act is, as noted above, during those instances when we suspect that one or both of the input channels is clipping:  That's easy - just set a threshold above which we do not make any decisions to adjust the volume.

Another concern is clipping - which can come from several sources.  As noted previously, too high an input level could exceed the dynamic range of the A/D converter on the PIC and if this happens, two things can occur:
  • Our readings are bogus since we don't know by how much the audio exceeded the range.
  • On the PIC - as is the case with many CMOS analog MUXes - if you exceed the voltage range of the input by going above the supply voltage or below ground you can affect the other A/D channels - even if they are not selected.  In other words, clipping in one channel will probably wreck the readings in the other channels as well!
If the gain of an audio channel has already been set to "low" and we are still clipping in that channel, the proper course of action is to simply ignore that reading:  If the clipping is happening fairly infrequently we can afford to do this, as we will soon get another reading that we can use.

Figure 3:
The automatic volume tracking box, sitting behind and under the TV.
Another source of clipping can be the TV's own speaker amplifier and/or the microphone near it.  Since the sense microphone is placed right in front of the speaker it is possible that on audio peaks we could hit the maximum level of which it is capable.  This condition is a bit harder to detect since we aren't actually causing clipping of the A/D converter in this case so all we can really do is to be careful in the initial setup of the system so that we don't encounter either instance.

Differences in audio sources:

One problem with using a microphone to pick up speaker audio is that the audio detected via that route will not sound the same as the "pristine" audio output from the back of the TV.  This inevitable result is due to neither the speaker nor the microphone being perfect in its ability to faithfully reproduce audio:  There will always be at least a small difference due to frequency response variations and resonances.

In order to minimize these differences I decided to purposely limit the frequency range over which the audio from either the microphone or the line out would be analyzed, and this was done using a low-Q 1 kHz bandpass filter.  The idea here is to pass audio only in the low-middle range of what can be heard - the general range in which most of the audio energy is actually present.  The "low Q" of the filter means that while its passband is centered at 1 kHz, it doesn't attenuate lower or higher frequencies particularly quickly - but it is sure to knock down the lowest bass or the highest treble significantly, those being the frequencies at which the speaker/microphone combination is likely to have the least fidelity and depart most from the sample taken at the line input.

Putting it all together:

The schematic diagram of this circuit is shown below.

Figure 4: 
Schematic diagram of the automatic volume tracker.
Click on an image for a larger version.

Only the left channel audio path will be described:  The right channel audio path is identical.

The audio from the TV's LINE OUT is fed in via C101 to a unity-gain follower op amp circuit (U1a) which is biased by R101 at a mid supply voltage, 2.5 volts.  The output of U1a is then fed to an electronic potentiometer, U2a which is also biased from the mid-supply voltage so that variations in U2a's settings do not cause a DC offset to occur.  The "wiper" of the U2a potentiometer is connected to another unity-gain follower, U1d which is then connected to the audio output.

Because unity-gain followers are used, there is no audio gain provided in this circuit, but setting the electronic potentiometer to "full scale" will result in very little attenuation of the audio being passed through the system, pretty much as if this entire circuit were bypassed.  When the electronic potentiometer is at the bottom end of its scale the attenuation is over 50 dB - more than enough range for our purposes.

Because there is only one "sense" microphone, jumper JP1 is used to select the LINE IN audio channel corresponding to the channel being monitored with the microphone:  Since it is common for the left/right channel content to differ quite considerably in stereo programs, it is important to make sure that JP1 is set properly!

From JP1 the selected signal goes to a 20-ish dB pad consisting of R301/R302 and then to a non-inverting amplifier built around U3a.  It may seem odd to attenuate a signal just to amplify it again, but this configuration was chosen during the initial design stages when it wasn't certain what the absolute audio levels would be.  It was also desired that both channels have similar circuitry and, as such, amplifier U3a's gain adjustment mechanism would work only if it had a fairly high gain to begin with.

If the processor detects that the line level audio is too low, it sets the PROG_GAIN pin to a LOW state, effectively grounding the bottom end of R304 and causing the gain to increase by nearly 20 dB.  C302 is used to prevent this action from causing a DC bias offset and R305 provides a bit of constant leakage to keep C302 from discharging while the amplifier is in the "low gain" mode (e.g. the PROG_GAIN pin set to a high-impedance state.)  Conversely, if the audio is high enough that the extra 20 dB is not needed, this pin is switched to be a digital input.
Figure 5:
Placement of the "sense" microphone:  Against the speaker!
Surprisingly, the microphone doesn't seem to be very prone
to "clipping" or saturation even at high speaker volume
levels - something that could skew the comparison. The
microphone is taped at the edge of the cone using
polyimide tape, mostly out of sight.

In testing, it was noted that better performance was obtained by setting the PIC's pin to "digital input" mode rather than "A/D input" mode, this due to the fact that, because of C302, the signal on this pin can swing below ground potential.  When this happened while in "A/D input" mode the other A/D channels (on the same MUX) were badly disrupted, and excessive clipping on that pin caused audio distortion in U3a as well as affecting the apparent gain of the stage.

From U3a the LINE audio signal passes through a low-Q audio bandpass filter consisting of U3d centered at about 1 kHz.  This filter's passband is (more or less) in the middle of the audio spectrum in which much of the energy of speech and music is contained and the thought is that discrepancies between the qualities of the audio directly coupled from the LINE input and those picked up by the microphone after they have first been reproduced by the speaker would be reduced.

By the time the audio leaves U3d, the bandpass filter, its level is close-ish to 5 volts peak-peak for the highest level audio that would be present with U3a in the "low" gain mode - this being done to maximize the dynamic range of the PIC's A/D converter.  By virtue of the AC coupling of the bandpass filter (via C303) and the 2.5 volt bias on U3d, pin 12, the output of the filter is also centered at 2.5 volts, mid-scale for the PIC's A/D converter.

The audio from the microphone follows a similar path except that its variable-gain amplifier, U3b, has more maximum gain to compensate for the relatively low level from the microphone as compared to the LINE input.  As with the LINE audio, it too is bandpass-filtered at 1 kHz, this time by U3c, the output of which is sent to the PIC.

Monitoring and adjustment:

One may notice that there is also a serial port (output) shown on the diagram, its DE-9 connector just visible in Figure 3 along with the "Gain+" and "Gain-" buttons and a dual-color LED.  In operation the serial output is constantly sending the detected microphone audio level, the program audio level, the difference between the two, the difference between the actual audio level and the level predicted from the other source and the gain setting, and whether or not the A/D was at or near clipping.

During TV program audio - particularly at a high volume level - this "telemetry" data is used to determine how well the device is tracking the audio and whether adjustments are needed.  The "Gain+" and "Gain-" buttons adjust how strongly the speaker audio level correlates with the attenuation that the processor applies to U2, the digital potentiometer, and the settings are saved in the processor's nonvolatile (flash) memory.  The LED is used to provide a visual indication of what the device is doing:  Red indicates that the audio "gain" is being increased (e.g. the attenuation of U2 decreased), green indicates a decrease in audio gain, and yellow - created by the processor rapidly switching between red and green - indicates that U2 is at minimum attenuation, which is the same as maximum volume.

How well does it work?

I've been using this on the TV for nearly 3 years now and have only tweaked the code slightly since installing it.  At low volume levels it does tend to "hunt" a little bit (e.g. the volume goes up and down several dB - usually not noticeably) with differing program material - likely due to some "spillover" from the large room speakers into the sense microphone - but at normal volume levels it is quite consistent.  There are a few instances that do still cause it to "hunt" (some types of music, a few specific voices) but it's not been severe enough for me to record such clips to the DVR and try to figure out what, exactly, is happening.

Monday, August 22, 2016

The solar saga - part 2: Getting the system online

In part one (March 2, 2016) I wrote about why I chose to use a series string inverter system.  (Hint:  It was to prevent radio-frequency interference.)  To read part one, click on the link here:  The Solar Saga - Part 1:  Avoiding Interference (Why I did not choose microinverters.)

Eventually, I was able to get the system "online" in late April even though everything solar-related had been in place for over 2 months.  Why the delay?

Figure 1:
Slip-sliding around, the work crew clearing ice and snow off
the metal garage roof.  Later, they wielded a propane
"weed burner" to loosen the remaining ice and snow and
dry the metal roof panels.
Click on the image for a larger version.
As is often the case in life, things don't always go exactly according to plan!

Installing the panels:

Going back a bit, the solar panels and inverter were actually installed in February.

In the winter.

In the snow.

Think about that for just a moment, particularly considering when the system was actually put online.

Figure 2:
The mounting rails for the solar panels, installed.  If you
look carefully in the background you can see where
someone fell hard on the ridge cap, slightly crushing it!
Click on the image for a larger version.
To be sure, there was a lot to do, but everything seemed to be going according to plan, with no obvious trouble (of which I was ever informed) with the city in pulling permits.

Part of the install actually began a few weeks before this when a survey crew came out to check out the "sun situation" where the panels were to be installed.  As soon as they arrived they placed a ladder against the garage roof and then I heard some muttering:  They'd just realized that they had left the device that analyzes the sun's path and potential shadows at the previous job some 15-20 miles north in another county.  Instead of being able to plop this device down in the locations of the solar arrays they squinted with expert eyes at the trees and sky and declared more or less "I don't think that shade will be a problem."  Who was I to argue - they were professionals!

Figure 3:
The installed eastern solar array.  At the time that the picture
was taken the system was wired up, but without a net meter it
couldn't be "officially" used.
Click on the image for a larger version.
Aside from lots of slipping and sliding on the snow-covered metal roof, the installation seemed to go well, with the rails installed the first day and the roof penetrations made - a rubber "flashing" type bushing being used to seal around the electrical conduit that emerged from within the garage itself, one for each array.

A few days later the panels were on the roof and the inverter wired up and connected to the new electrical sub-panel that I'd put in the garage a few weeks before.  While the system worked, I really couldn't use it as the "Net Meter" was not yet installed and any excess power that I produced would be charged to me just as if I'd actually used it!
Figure 4:
A screen shot of the system producing just under 1400 watts
in "Standalone" or "Island" mode - a configuration that allows
the solar electric system to produce useful energy even if the power
grid was down, unlike a microinverter system where the potential
electrical solar power from the panels is completely
inaccessible if the power is out!
Click on the image for a larger version.
While I could not do the "net meter" thing, one feature provided by the Sunny Boy inverters - but not with microinverter systems - is the "Secure Power System" or "SPS" (tm) that will allow power "islanding".  In other words, if the main breaker is shut off and a switch is flipped, the inverter will provide up to 1.5kW of power (12 amps, 125 volts) - even if the mains power is unavailable, provided that there is adequate sun, of course:  Just try that with a microinverter system!

Figure 5:
The installed Sunnyboy 5+kW inverter and the garage
sub-panel to which it is connected.   Below and
to the left of the inverter is the DC disconnect switch
for the two independent MPPT solar panel strings
(the "East" and "West") and just to the right
of it is the "SPS" or "Standalone" power outlet capable of
providing up to 12 amps at 125 volts (1.5kW) even
if the electrical grid is offline.
Click on the image for a larger version.

A problem with the electrical service entrance:

From the beginning I was informed that my main electrical panel - the place where the power from underground gets to the house - would have to be replaced.  Fortunately, this cost was "baked into" the cost of the system itself and since I'd replaced the sub-panel in my garage myself, the installers would cover it.  In asking around I determined that in my case, the typical cost for this would be in the $1000-$1500 range including all parts and labor.

The reason that the old panel had to be replaced was ostensibly due to the "20% rule", and in my case it went something like this:

My panel, originally installed when the house was new (early 1970s), was rated for 100 amps on the bus.  The "20% rule" said that it was permissible to have 20% above this value, or up to 120 amps.  The problem was that my photovoltaic system, being capable of 5.3 kilowatts, could in theory put 22-24 amps (depending on voltage) onto the bus and this, combined with the 100 amp main breaker, meant that I could put a total of up to 124 amps on the bus.

This would not pass muster - or inspection - so the panel was to be upgraded to a 125 amp unit with a 100 amp breaker which, according to the same rule, should allow a total of 150 amps on the bus.

A few days before the date in late February when the service upgrade was scheduled I got a call from the contractor saying that they couldn't do it:  The power company would not sign off because of the location of the gas line and meter with respect to the electrical panel itself.  After being informed of this I took a walk through my neighborhood and observed that about a third of the houses had their electrical panels "on top" of the gas meters.

Figure 6:
 The old meter and below it, the gas meter for the
house:  Since the solar was now connected, it was
required that the red warning tag be attached even
before the net meter was installed.
The power company ultimately determined that the
electrical panel and its underground conduit had to be
relocated to a minimum of 36" (about 92cm) distant from
the gas meter and any of its piping.  I couldn't be sure,
but it looked as though the gas meter was installed
after the electrical with the original riser pipe for the
electrical being wedged between the
house and the gas meter!
Click on the image for a larger version.
As it turns out, current code in this area requires at least a 36 inch separation between the closest part of the gas line and meter and any part of the electrical entrance.  What was surprising is that none of the contractors that had visited my place to assess the scope of work had caught this, let alone planned for it!

In my case the gas meter was literally touching the conduit from the underground power feed and the panel itself was about 3 feet above the gas meter.  Since the panel was to be upgraded, it had to meet current code so it would have to be moved.  This also meant that the underground power feed, which was a length of "direct burial" wire would also have to be redone, placed in 4" conduit and run to the location of the new meter.

All of this meant a delay - about 4 weeks, as it turned out - as the plans had to be revised and arrangements had to be made to coordinate the schedule of the electrical contractor with the schedule of the city inspector along with having "Blue Stakes" come out and mark the utilities so that yet another contractor could dig a trench in my front yard from the power junction to the location of the new meter.  While this meant that it would be another 4 weeks or so before the work would be done - and likely another 2-3 weeks after that before my "Net Meter" would be installed - it also meant that the company in charge of all of the work would end up "eating" the difference in cost which was probably something in the area of an extra $1000-$1500 on top of the already-allocated cost for panel replacement.


There is a device called a "Connect DER" (tm) that may be used in many locales to bypass the need to upgrade or replace the house's electrical service when solar power is installed.  This device plugs into the original meter base and is sandwiched between the original panel and the power company's electrical meter so that the additional current of the solar power system never appears on the house's electrical bus:  The connections to the solar are made on the Connect DER itself, with a built-in circuit breaker for electrical protection and to allow it to be disconnected.

For various reasons this sort of device was apparently not an option in my case - possibly because of the fact that the gas and electrical piping were co-located. 

Figure 7:
The narrow trench that magically appeared in my yard,
running between the underground junction from the
electrical utility to the approximate location of
the new panel.
Click on the image for a larger version.
Finally, the day arrived when a trench appeared in my front yard (figure 7).  A few days later the electricians arrived and installed the new panel and a conduit in the trench - but I couldn't help but notice that it was only about 23" from the gas lines, a fact that I mentioned to them when they arrived the next morning (figure 8).  After a bit of digging and drilling, the new panel and conduit were suddenly another 18" or so farther away from the gas meter than they had been.  At about this time the power company and city inspector showed up to disconnect the power from the mains and pull new conductors into the conduit, under the watchful eye of the inspector who gave preliminary approval to the work plan.

Off came the old panel to be replaced with a weatherproof junction box, connected to the new panel with a run of conduit.  In the new junction box - at the location of the original panel - a "horse tail" of wires appeared representing the individual circuits in the house, each of which had to be spliced with a new set of conductors wired to the new panel.  After about 4 hours of work everything was turned back on and I was back in business.

Figure 8:
Oops!  That ain't no 36" separation between the gas
and electrical!  The vertical pipe on the left, against the
brick was the conduit conveying the electrical to
the old meter.
Click on the image for a larger version.
A day or two later the city inspector came back to meet with me and a representative of the contractor to survey the work done both with the installation of the new panel and the photovoltaic system.  Finding everything to his satisfaction he gave his approval which also meant that the power company was notified so that the "Net Meter" would be installed.  A couple days after this a small work crew from a landscaping company appeared, filling in the trench in the front yard and replanting the sod that had been removed.

About two and a half weeks later I came home from work to find a notice from the power company stuck to my front door indicating that the net meter had, in fact, been installed, so I happily closed the necessary breakers to put the system online.  Since it was already evening, not much power was produced that day, but it was ready for the next day's sun!

* * *

Figure 9:
In mid-job, the "horse tail" wires from the original
breaker panel emerging from the junction box that
had been installed in the approximate location of the
original breaker panel.  New wires were run
between it and the new service entrance/breaker panel.
I have since painted the new junction box and conduit a
red color to somewhat match the brickwork.
Click on the image for a larger version.
A minor shading problem:

As of this posting it has now been about four months since the system was put online and it has been working quite well.  One minor complication - something that I would have addressed earlier had I been aware of it - has to do with the fact that some of the eastern panels were located where they get shade until a bit after noon, reducing the output of the east array by 15-20% during that time.

By late March I was noticing that the northern-most panel was starting to be shaded by a nearby pine tree, and by June and July the sun's path had shifted to the point where at least three panels were completely shaded in the morning, the shade finally clearing about an hour before "local noon" - or around 12:45.  Had I been aware of this I might have requested that the east panels be arranged somewhat differently to reduce this effect, as there is plenty of room on the roof to do so.

Would microinverters have improved this situation?  Perhaps by only 5% or so:  The real problem is that the panels get shadowed lengthwise, equally affecting each of the three sections of the panel isolated by the built-in "shade tolerance" bypass diodes.  These diodes have no useful effect when shading occurs along that axis, which means that little power could be extracted from a shaded panel by any means.

A month or so after installation I was able to get them to come back out and do the formal shade analysis that they hadn't done before starting the job and it confirmed what was empirically observed - plus it gave a bit of information as to what problems could arise in the future in terms of tree growth.

* * *

No RF noise at all!

As for one of my original concerns - that of generated RF noise - I can detect absolutely nothing from the photovoltaic system at all, at any frequency.

The "quiet-ness" of the system can be borne out by the fact that even if one brings a portable receiver right up to the panels or the inverter, nothing at all can be heard from it except when its antenna gets within a few inches of the inverter's LCD panel.

Where I do get some RF noise is from sources unrelated to the solar power - switching type "wall warts" scattered throughout my house, powering various things - but most of the "problem" devices have already been quieted as described in previous postings on this blog.

* * *
Generation of power - observations:

In the (over) four months of operation the cumulative amount of power produced has exceeded my actual usage by about 300 kWh, so the recent power bills have been low - just the "minimum charge" of less than $10.  According to my calculations based on past and current usage, I expect to use up that surplus in the winter when the "production" of the photovoltaic system will be much lower due to the lower sun angle, shorter days, occasional snow cover and the tendency for there to be extended temperature inversions that can block sun for days at a time.

At the moment I do not have a "refrigeration" whole-house air conditioner - only an evaporative (a.k.a. "swamp") cooler and a wheel-around "room" air conditioner for those relatively rare days that it is both hot and humid, but I'm considering getting a whole-house A/C sometime in the future:  When I do that I may consider increasing the capacity of my photovoltaic system.

While the system has eighteen 285 watt panels which are theoretically capable of 5130 watts, the slant of the roof (north-south ridge line with panels mounted flat on the east and west sides), the operating temperature of the panels (an output power reduction of 0.45% per degree C of panel temperature) and the actual solar insolation (e.g. the actual amount of solar energy) have limited the peak power to around 3800 watts on hotter, crystal-clear days and about 4300 watts on cooler days.

If one does the numbers, this should not be too surprising.  For example, the 285 watt panel rating assumes a cell temperature of 25C.  On a hot summer day, where the ambient air temperature is around 38C and the panels themselves are around 50C (a fairly modest temperature, as my roof is metal and very light-colored, which keeps it quite cool), a temperature derating of about 0.5%/C means that I have lost - from heat alone - 12.5% of the power and can expect only about 250 watts per panel, or 4500 watts from the system - and that assumes that the sun is illuminating the panels at optimal right angles - which it really cannot at any time of year.

Since my panels are on a roof with a moderate east-west pitch, I lose another 15% or so of solar insolation on a typical summer day due to the angle, yielding a number that is actually pretty close to the 3800 watt peak.  What I have observed is that, because of the east/west angle of the two strings, I see a slight "double" peak around "local" noon:  Before noon the angle is nearly optimal for the east array and, similarly, after noon for the west array.  To make matters worse, during many days of the summer in recent years the valley's air is a bit murky due to some smog and the frequent wildfires that seem to be a regular occurrence in the western U.S. these days, knocking off another 10-15% of production.

What this means is that, in theory, I could have used a 4kW inverter if I was willing to tolerate a bit of "clipping" (e.g. more available photovoltaic power than the inverter will produce) on optimal days (e.g. cool spring or fall days with clear air) and possibly have averted all of the hassle with the 20% rule and the replacement of the electrical panel, but considering the state of the older electrical distribution panel, its replacement was probably for the best! If I wished to do so, I could (in theory) add another 4 panels to my system which would just about bring it to clipping under optimal conditions and to around 4600 watts on a normal, summer day.

The other option - if I needed more capacity after, say, adding a house air conditioning unit - would be to simply install another set of panels - maybe 14-16 or so - and a separate inverter - a 3.6 kW unit, perhaps:  This would still stay within the 20% rule for the new electrical panel and add a degree of redundancy.  Since the "hard" work (e.g. updating of the electrical service, etc.) has already been done, such an addition would be comparatively easy.

As of the time of this writing (mid August, 2016) it would appear that the company that I used for the installation of my system (Auric Solar) will no longer consider the use of series-string photovoltaic systems, at least for residential customers - a statement based on a conversation a friend and fellow amateur radio operator had with a company representative.  The impression given - perhaps unintentionally - was that they had enough business that they didn't necessarily need to offer flexibility or other system options to their potential customers.

What this means is that for fellow amateur radio operators who wish to avoid an "RF noisy" installation, I've recently been suggesting another company.

To read part one of this article click on the link here:  The Solar Saga - Part 1:  Avoiding Interference (Why I did not choose microinverters.)

Tuesday, August 9, 2016

A low-voltage disconnect for 12 volt lead acid and lithium batteries

Figure 1:
The as-built and working prototype constructed.
Click on the image for a larger version.
There are two things that you don't want to do with any rechargeable battery on a routine basis:
  • Overcharge it.
  • Overdischarge it.
While the above are true for lead-acid batteries, they are particularly true of Lithium-Ion chemistries, but for different reasons.

With Lead-acid batteries:
  • Lead-acid batteries - particularly the "flooded cell" types (e.g. those to which you can add water) - can handle quite a bit of overcharging as long as the electrolyte level is maintained.  "Sealed" batteries (e.g. AGM, or those that many mistakenly call "gel" cells) can handle some overcharging, but only to an extent before their pressure vents release accumulated gasses, reducing the amount of usable electrolyte - which is why they should never be "equalized".
  • Lead-acid batteries can also handle being run completely down - as long as you don't keep them in that state for very long (a few days at most - as little time as possible) and don't do it very often.  In other words, if you run an otherwise healthy lead-acid battery completely dead and immediately recharge it, little actual damage is likely to have been done beyond taking a bit of life off it farther down the road.  In cold-weather environments, while the degradation (primarily sulfation) is dramatically slowed, an extremely deep discharge also reduces the electrolyte's specific gravity, raising its freezing point and increasing the possibility of the battery being damaged/destroyed at very low temperatures if it does freeze.
With Lithium-ion batteries:
  • If a battery is overcharged, it will start to chemically decompose.  Gross overcharging - while tolerated at least briefly by lead-acid batteries - may result in a lithium-ion battery venting and/or exploding, possibly catching fire.
  • If a battery is over-discharged it will chemically decompose, often with the contained lithium changing into a more volatile - and not useful - state.  Severe over-discharging (e.g. below 2 volts per cell - this voltage varies depending on chemistry) can mean that the battery can never safely be charged again or, in some cases - if the voltage is only allowed to get down to this general area and not lower (again, the voltage varies according to chemistry) some special charging precautions are required (e.g. a specific, very low-rate trickle charging regimen) to "recover" this battery.
  • There are also some specific temperature-related restrictions with lithium-type batteries regarding their use and charge/discharge and these are noted by their respective manufacturers.
Avoiding over-discharge:

The avoidance of overcharging is usually pretty easy:  Just use the appropriate charging system.  Preventing over-discharge is a bit more difficult, particularly if the battery packs in question don't have a "protection" board built in.

Lead-acid batteries (almost) never come with any sort of over-discharge protection - one must usually rely on the ability of the device being powered (e.g. an inverter) to turn itself off at too low a voltage and hope that the threshold is sensible for the longevity of a 12 volt battery system.  For low-to-moderate loads (e.g. 1/10th "C" or so) a pretty safe "dead battery" voltage for a 12 volt lead-acid battery is around 11.7 volts - or somewhat higher for heavier loads.  Again, after disconnect, it is not a good idea to keep the battery in a discharged state for any longer than necessary.

Many larger (e.g. >10 amp-hour) lithium-iron phosphate (LiFePO4) batteries do not routinely come with "protection" boards unless they are ordered specially or include some sort of "Battery Management System".  Batteries in this category include the "lead-acid" replacements sold for use with motorcycles and off-road vehicles and some of the "raw" LiFePO4 batteries available from many vendors, such as the 20 amp-hour modules made by GBS.

While it is also important to equalize LiFePO4 batteries when charging (refer to this post - Lithium Iron Phosphate (LiFePO4) batteries revisited - Equalization of cells - link) the more immediate danger in routine use is accidental over-discharge.

A simple low-voltage disconnect circuit:

Again, for lithium batteries one may install "protection" boards that prevent accidental over-discharge and, in some cases, provide charge equalization - but such things are much rarer for lead-acid batteries.  Fortunately, a suitable circuit is quite simple and is applicable to either lithium or lead-acid batteries.

Figure 2:
Schematic diagram of the low-voltage disconnect circuit.
Not shown is overcurrent protection (e.g. fusing) that should be present on the output of the battery - see text below. 
If desired, LED1 can be placed in series with R2 (changed to 2.2k) with R7 omitted, as an indicator that the circuit has actually latched - not just that there is voltage present on the Load+/- terminals.
Click on the image for a larger version.

These days it is rather easy to construct a low-voltage disconnect circuit using readily-available components:  The diagram of one such circuit may be found in Figure 2.

How it works:

The RESET button is pressed, applying a positive voltage to the gate of N-channel power MOSFET, Q1, turning it on, which then connects the "BATT -" output terminal to the "LOAD -" terminal.

If the voltage at the "Ref" terminal of U1 is above 2.5 volts - as determined by the voltage divider consisting of R4, R5 and R6 - the cathode of U1 is pulled toward its anode (e.g. "Load -"), dragging the base of Q2 down and making it negative with respect to its emitter, turning Q2 on; R2 limits Q2's base current to a safe value while allowing enough current for U1 to function.  With Q2 turned on, Q1 is "latched" on, even when the RESET button is released.

If the voltage at the "Ref" terminal of U1, representative of the voltage across the LOAD terminals, drops below 2.5 volts, U1 turns off and the base of Q2 is pulled up to the emitter voltage by R1, turning it off.  With Q2 turned off, resistor R8 pulls the gate of Q1 down to its source, also turning it off and disconnecting the load.  Because of the "latching" effect, once this has happened the load will never be reconnected until the RESET button is pressed:  With Q1 turned off, U1 is without voltage (e.g. "Load -" rises to the same voltage as "Load +") and can never turn Q2 (and thus Q1) back on again.  Even though pressing and holding the RESET button will connect the load when the voltage is below the threshold, the circuit will not stay "on" once the RESET button is released until the voltage rises above the threshold.

To accommodate a range of voltages, U1's "Ref" terminal is connected across the output (Load +, Load -) with R4 and R6 to "scale" the range of potentiometer R5 to have a threshold in the 8-16 volt range:  Without R4 and R6 the usable range of R5 would be compressed to a very small portion of the overall rotation and make adjustment touchy, but with these resistors setting R5 at mid-rotation yields a threshold of around 11 volts.
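Using the TL431's 2.5 volt reference and the 4.7k/1k/1k example ratio mentioned later in the text, the adjustment range can be verified numerically.  This is a sketch only - the divider-split values assume R4 sits above R5's wiper and R6 below it, which should be checked against the actual schematic:

```python
def ref_fraction(r_above_wiper, r_below_wiper):
    """Fraction of the load voltage delivered to U1's REF pin by the
    R4/R5/R6 divider, split at R5's wiper."""
    return r_below_wiper / (r_above_wiper + r_below_wiper)

def dropout_threshold_v(r_above_wiper, r_below_wiper, vref=2.5):
    """Load voltage at which REF reaches the TL431's 2.5 V reference."""
    return vref / ref_fraction(r_above_wiper, r_below_wiper)

# With 4.7k / 1k / 1k and R5 at mid-rotation (5.2k above, 1.5k below):
print(round(dropout_threshold_v(5200, 1500), 2))   # 11.17 - about 11 volts
# At the extremes of R5's rotation:
print(round(dropout_threshold_v(4700, 2000), 3))   # 8.375
print(round(dropout_threshold_v(5700, 1000), 2))   # 16.75
```

Note how R4 and R6 confine the endpoints to roughly the 8-16 volt range described above; without them, most of R5's rotation would map to unusable thresholds.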

Note the presence of capacitors C1 and C2:  C1 provides a bit of filtering of the sampled output voltage to prevent brief current transients - which might momentarily drag the voltage below the threshold - from being "falsely" detected as an undervoltage condition.  Similarly, C2 slows the "fall" time of Q1's gate voltage, preventing it from shutting off instantly in response to a brief spike of current, and it also provides some degree of protection of Q1's gate against possible voltage transients.

While not explicitly tested, the presence of C1 and C2 should provide a modicum of RFI protection.  If your environment includes high RF fields - such as powering a 100 watt amateur transceiver - this should be considered in testing and in the construction/layout, knowing that such a transceiver can also impose very brief, high-current loads on the battery, causing momentary brown-outs due to I*R drops in the wiring and battery which could also trip this circuit.

Finally, the combination of R7 and LED1 provide an indication of power-on to the user - see the note on modification of this circuit, below.

Additional circuit notes:

The "high voltage" limitation of this device is primarily that of the gate voltage rating of Q1.  Most power FETs are rated for only +/- 20 volts gate-to-source, which means that this circuit is suitable for no more than a "12 volt" bus (e.g. 10-16 volts or so):  If a higher operating voltage is required it will be necessary to add additional circuitry around the FET's gate to keep its voltage safely below its rating.  For an example of such circuitry see this article - A Simple, effective, yet Inefficient Solar Charge Controller - link - taking note of components D1, R7, R8 and C4 surrounding Q3 in Figure 3 on that page.

If a cut-off lower than 9 volts (or higher than 15) is required it will be necessary to recalculate the values of R4 and R6 (in Figure 2, above) to appropriately scale the adjustment range.

It should also be noted that if voltages below 10 volts are routinely required one should pay close attention to the saturation (e.g. "full on") gate voltage required for the FET that you plan to use:  Typical FETs do not achieve their lowest resistance until 8-10 volts of gate-source voltage is present, but there are "logic level" FETs available that will be fully "on" at around 5 volts.

Finally, there is a slight modification to the circuit depicted in Figure 2 that could be made:  Place LED1 in series with R2, decreasing the value of R2 to 2.2k or so and omitting R7 entirely.  This modification not only saves a few milliamps of "on" current, but it also provides an indication of when the circuit is actually latched in its "on" state - particularly useful if the load has its own, separate power source, which would cause LED1 to illuminate no matter the state of the disconnect circuit if wired according to Figure 2.


None of the components are critical, with the possible exception of R4, R5 and R6, which are selected to scale the adjustment range of R5:  While it is the ratios of these components that are important (e.g. one could use 4.7k, 1k and 1k for R4, R5 and R6, respectively) going much higher than the stated values may violate the minimum reference current specification of U1, resulting in temperature/device variations of the set voltage thresholds.

The TL431 (U1) is a rather ubiquitous chip, found in practically every PC-type power supply made in recent years and is available in single quantities for well under $1.

Q2 may be practically any silicon PNP transistor with a rating of at least 30 volts while Q1 may be any N-channel MOSFET with a voltage rating of at least 30 volts and a current rating of at least 3 times the current that you plan to draw and an "ON" resistance of a fraction of an ohm.  For the prototype I used an F15N05 FET - a 15 amp, 50 volt device, more than adequate for the 3 amp load that was to be used, but one could use as "large" a power FET as you wish.  For "12 volt" operation make sure that the FET that you choose has at least a 20 volt gate-source voltage rating.  Higher-current FETs include the IRFZ44 (50 amp max.) and the PSMN2R7-30PL (100 amp max.) to name but two out of hundreds of possibilities.  If even more current is required one can parallel multiples of the same-type FET as needed, potentially providing many hundreds of amps of capacity, provided the wiring is appropriately considered.

Device layout is not critical aside from the use of appropriately heavy conductors to the source and drain leads of Q1 to carry the current.  For most applications a heat sink is not even required for the FET - particularly if one chooses a device with milliohm-range "on" resistance - but there is never any harm in doing the calculations yourself to verify that this is true for the FET that you choose.  Note that the "Batt+" to "Load+" lead is straight-through and the wire connecting this circuit to that "through" connection may be of light gauge:  The only caveat is that it is recommended that this circuit be connected closer to the "BATT+" terminal than the "LOAD+" terminal to minimize the resistance of the connecting wire, which could cause the circuit to sense a slightly lower voltage than is actually present.
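The "do the calculations yourself" check on the FET is just I²R.  A brief sketch - the 0.1 ohm and 3 milliohm on-resistance figures are typical datasheet-order values used for illustration, not measured:

```python
def fet_conduction_loss_w(load_current_a, rds_on_ohms):
    """Dissipation in a fully-'on' MOSFET: P = I^2 * Rds(on)."""
    return load_current_a ** 2 * rds_on_ohms

# The prototype's 3 amp load through an older FET with ~0.1 ohm Rds(on):
print(round(fet_conduction_loss_w(3, 0.10), 2))    # 0.9 - fine without a heat sink
# The same load through a modern milliohm-class FET:
print(round(fet_conduction_loss_w(3, 0.003), 3))   # 0.027 - negligible
```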

Finally, note that this circuit works by disconnecting the "BATT-" and "LOAD-":  Your battery's negative terminal must be completely isolated from the load for this circuit to work properly and protect your battery!

(Comment:  It is possible to reconfigure this circuit to disconnect in the positive lead, but this requires the use of a P-channel power FET:  A not-yet-built or tested circuit design is available on request.)

Adjustment and Operation:

For proper set-up an adjustable power supply is required and the procedure is as follows:
  • Set the power supply to a volt or two higher than the desired drop-out voltage.
  • Adjust potentiometer R5 so that the wiper is closest to R6, setting the drop-out voltage to its minimum (e.g. the highest voltage measured between U1's REF terminal and LOAD- while the RESET button is being pressed).
  • Connect the device to the power supply using the BATT- and BATT+ connections.  No load is required for testing.
  • Press and release the RESET button:  The LED should stay on, but if not, check the adjustment of R5 to verify that it is providing the maximum voltage to U1's REF terminal.  If this checks out, check for proper resistor values of R4, R5 and R6 as well as proper wiring of U1.  Note that the circuit will not stay on if U1's REF terminal is below 2.5 volts.
  • Lower the power supply to the desired drop-out voltage.  The LED should stay on, but if not check the setting of R5.  (Remember that the useful range of R5 with the specified values of R4 and R6 is in the 8-16 volt area.)
  • Slowly rotate R5 until the LED just turns off.
  • Increase the power supply voltage slightly, press and release the RESET button and verify that the LED turns on and then goes off again when the voltage drops below the threshold, repeating the above steps as needed.
If the device being powered has a high "starting" current it is possible - particularly if the battery is weak or near the cut-off voltage and/or the disconnect is located at the end of a long run of rather small-gauge wiring - that the circuit will drop out before the battery itself reaches the pre-set threshold.  If this happens and it is not practical to move the device closer to the battery or to increase the wire size to minimize lead resistance, one can increase the value of C1 (to as much as 47uF) to slow the response time, allowing a momentary "brown out" to occur without tripping the device.  Note that with such a capacitor it will take longer to respond to voltage changes, but this should not be an issue from the viewpoint of protecting the battery.  The value of C2 can also be increased, but not much beyond 1 uF as this will excessively slow the "turn off" time of Q1, causing it to spend more time out of saturation and potentially dissipating more heat in the process.
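The "slower response" from a larger C1 can be estimated from the RC time constant that it forms with the divider's Thevenin resistance.  This sketch assumes the 4.7k/1k/1k example values with R5 at mid-rotation; the actual schematic values may differ:

```python
def parallel(r1, r2):
    """Resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def c1_time_constant_s(c1_farads, r_above=5200.0, r_below=1500.0):
    """Time constant of C1 against the divider's Thevenin resistance,
    seen from the REF pin (upper and lower divider legs in parallel)."""
    return c1_farads * parallel(r_above, r_below)

# Increasing C1 to the suggested 47 uF maximum gives a few tens of
# milliseconds of "brown out" ride-through:
print(round(c1_time_constant_s(47e-6) * 1000, 1), "ms")   # 54.7 ms
```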

Additional comments:

In this particular application Anderson Power Pole (tm) connectors were used on the input and outputs allowing this device to be easily removed from the circuit and configured as needed.

This device should also not be left connected to a battery in long-term storage as it draws several milliamps when it is in its "ON" state due to the LED and the current consumption of Q1/Q2 and associated resistors, R4-R6 and U1.  When in its "OFF" state its current consumption is negligible (likely in the nanoamp range) so if it is left connected and the battery gets drawn down, it will still do its job, disconnecting the load - and itself - from the battery and protecting it.  Note that if the load is "back-fed" from another source - say an AC/solar charger or power supply - and the voltage rises above the threshold, this will have the same effect as pressing the RESET button, turning the circuit on.

It is recommended that one NOT attempt to charge the battery "through" this device - at least at higher currents:  In theory it should work, but the current will flow backwards through the FET.  While a FET that is turned "off" has an intrinsic "backwards" diode, current through this diode drops 0.5-0.8 volts, causing the FET to dissipate far more power than it would if it were actually "on".  If the charge rate is limited to a rather low current - perhaps less than 3-5 amps - the amount of heat dissipated by the FET should be tolerable.  Until the voltage rises above the cut-off threshold the FET will exhibit this 0.5-0.8 volt drop, but above this - when the circuit turns the FET on - this diode drop will largely disappear.  If you do this it would be a good idea to test it at your intended charge current in the worst-case scenario (e.g. highest current, with R5 adjusted so that the circuit does not trigger "on" during the charge, forcing the "diode drop" across Q1 to exist) and note whether additional heat-sinking of Q1 is needed.  Note:  If this is done, the "LED1-R2" modification noted above is recommended so that the LED will properly show the state of the circuit.
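The extra heat from charging "backwards" through the body diode is easy to estimate.  In this sketch the 0.7 volt drop is the middle of the 0.5-0.8 volt range above, and the 10 milliohm on-resistance is an assumed modern-FET value for comparison:

```python
def body_diode_loss_w(charge_current_a, vf=0.7):
    """Dissipation while charging through the off FET's intrinsic
    diode: P = I * Vf."""
    return charge_current_a * vf

def fet_on_loss_w(current_a, rds_on_ohms=0.010):
    """For comparison: conduction loss once the circuit latches the
    FET fully 'on' (P = I^2 * Rds(on))."""
    return current_a ** 2 * rds_on_ohms

print(round(body_diode_loss_w(5), 2))   # 3.5  - may need some heat-sinking
print(round(fet_on_loss_w(5), 2))       # 0.25 - comparatively negligible
```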

Not shown - but recommended - is the use of some sort of fuse or other overcurrent protection on the output of the battery.  It is recommended that the fuse rating be no higher than a third of the current rating of the FET to increase the chance that the FET will survive the surge current required to blow the fuse in the event of a dead short on the output.