Thursday, September 13, 2012

Two repeaters, One frequency (Part 3)

For a follow-on article in this series, see Part Four (link)  for a discussion of how the voting receiver system works.

In parts One and Two the general overview of a "synchronous" (or "simulcasting") and voting repeater system was discussed.  In a nutshell:
  • Both repeaters operate on the same frequency, saving spectrum and simplifying the system's use, since the user doesn't have to remember which particular frequency of a "normal" linked system covers a certain area best.
  • The coverage of the two repeaters overlaps to a degree.
  • Because of precise frequency control, the two transmitters don't really clobber each other in overlap areas, particularly in a moving vehicle.
  • Because of voting receivers and multiple transmitters, the users can seamlessly move between coverage areas with no intervention on their part.
  • The total coverage is greater than the sum of the parts owing to the increased likelihood of one or another site hearing the user and/or being heard - particularly if in an area where coverage is spotty to one or both sites.
 Originally (back in the late 90's) the idea was to frequency-convert the received signals from the 2-meter frequency to a subcarrier at baseband and send them to the main site where they could be voted upon, after which a master modulator would ship back (via a microwave link) a subcarrier that was then up-converted to the transmit frequency.

The details were worked out and some of the equipment was actually built and tested - and it worked!  However, the magnitude of the task bogged things down and one thing led to another and the project languished - until 2009.

By then I'd already put together two voting systems and one multi-transmitter synchronous system (using GPS frequency references) and had other ideas on how to do things a bit more simply - which translated to "being more likely to get completed!"  The project got underway in earnest in mid-July of 2009, when the plans were re-drawn and tasks divided as appropriate.

Instead of building the transmit and receive gear from the ground up it was, instead, decided to modify off-the-shelf GE MastrII radio gear to fit the bill.  This equipment is readily available on the surplus market and the individual pieces could be used with little or no modification - which meant that spares of those same pieces (receiver, transmitter, power amplifier, etc.) could be kept on hand!  What's more, for the most part these units used common, off-the-shelf parts (resistors, capacitors, transistors) and were thus field-repairable now and for the foreseeable future.  Finally, a lot of information about these radios is available on the web, so if some troubleshooting is required in the future, there's plenty of advice to be had online.

The modifications required to the radios were fairly simple:
  • Instead of a standard crystal module (called an "ICOM" by GE), a simple plug-in module (using a "gutted" ICOM) was fitted to the exciter.  This was connected via coax to an external module that provided the low-frequency signal (at 1/12th of the transmit frequency) at precisely the right frequency.
  • Transmit audio was fed into the subaudible tone input port.  This was done because it did not have the highpass and lowpass filters that the normal microphone inputs had:  We would do the high/low pass filtering externally!
  • The receiver modification (for the Scott's Hill site) simply involved obtaining discriminator audio.
There were some additional modifications done to provide interfacing to the rest of the system - namely an outboard de-emphasis, a low-pass filter and a switchable notch filter (for a "quirk" we later discovered) but these were mounted on the backplane - a more-or-less passive board that would likely never require replacement!  Pretty much everything else was "stock" and could be tuned up and adjusted according to the original manuals!

Transmit frequency control:

The most important aspect of a multi-transmitter (simulcasting) repeater system is that the transmitters be where they are supposed to be, frequency-wise!  While there are several ways of doing this, we took a somewhat unusual approach.

A standard transmit crystal (at 1/12th of the VHF transmit frequency) was ordered and placed into an "EC" type ICOM.  This is, in effect, a self-contained oscillator module that has provisions to be frequency-controlled with an external voltage.  This module is completely standard and off-the-shelf and it could be plugged into any GE MastrII VHF transmitter and work normally.

In our case, however, this "EC" module is plugged into an external module - called a "Disciplined Oscillator" - that takes the crystal frequency (which is 12.2183333 MHz for a 146.620 MHz transmit frequency) and locks it to a reference based on a 10.0 MHz oven-controlled crystal oscillator.  This is done by synthesizing an audio frequency, using a PIC microcontroller clocked from the 10 MHz oscillator, that has a resolution of a few parts per billion; with a bit of dividing, mixing and comparison this has the result of locking the 12.2183333 MHz oscillator to the 10 MHz reference to within a tiny fraction of a Hz.  Essentially, the frequency accuracy is that of the 10 MHz oscillator!

The 10 MHz oscillator is an oven-controlled crystal oscillator (OCXO) pulled from scrapped satellite gear and is well-aged (made in about 1990) and has a stability of about 10E-8 - within 1 Hz or so at the 2 meter transmit frequency.  This OCXO also has an external voltage control tuning line that is under control of the PIC microcontroller and with it, the 10 MHz frequency (and thus the transmit frequency) can be tweaked to set each transmitter on the desired frequency - which also means that the frequency difference between the two transmitters may be precisely controlled.  In the nearly 3 years since the system was made operational we've observed that the transmitters have stayed within about 1 Hz of their intended frequencies relative to each other over the course of the temperature excursions during the year!
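The arithmetic above is easy to sanity-check. Here is a minimal sketch (the constants are taken from the text; the synthesizer itself is not modeled):

```python
# Quick check of the disciplined-oscillator numbers described above.

tx_freq = 146.620e6       # 2-meter transmit frequency, Hz
xtal = tx_freq / 12       # the exciter multiplies the crystal x12
print(xtal)               # 12218333.33... Hz, i.e. 12.2183333 MHz

# OCXO stability of about 1E-8, multiplied up to the output frequency:
print(tx_freq * 1e-8)     # ~1.5 Hz at the 2-meter transmit frequency

# Synthesis resolution of "a few parts per billion" (3 ppb assumed here):
print(tx_freq * 3e-9)     # ~0.44 Hz steps at the transmit frequency
```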

This "Disciplined Oscillator" module also has another function that, since it is computer-based, was easy to implement:  that of a simple dual cross-band repeater.  At Scott's (the remote site) it cross-bands the 2 meter receiver to the 70cm link transmitter - taking care of things like proper IDing, timeout timers, etc. - and it also takes the 70cm link receiver and controls the 2 meter transmitter coming back the other direction:  Both directions operate independently of each other...

Squelch control and voting:

It also does one more thing:  COS (Carrier Operated Squelch) signalling.  The 146.620 repeater is one of the few repeaters in the area that does not have a subaudible tone requirement - this because it's an "open" repeater and because extreme care is taken at all receiver sites to keep the receive frequency as clean as possible - a task that is arguably easier since the demise of analog television in the U.S.!

Since the Scott's Hill transmissions are relayed to/from the master site via a UHF link, there would be an extra squelch tail (the "ker" in "ker-chunk") if the loss of a signal at Scott's were signalled simply by its UHF transmitter being keyed/unkeyed.  Instead, the loss of squelch is signalled by the appearance of a strong 3.2 kHz tone sent over the link, which performs two functions:
  • It signals to a decoder at the master site that the squelch has closed at the other end.
  • It signals to the voting controller at the master site that the signal being received is "bad" and should NOT be used.
(This 3.2 kHz tone is "notched" out and its brief appearances in the system audio are not heard by the users.)

So, what happens if a user's signal into Scott's is dropping rapidly in and out?  As the squelch opens and closes, the tone is turned off and on (tone on = squelch closed/signal dropout.)  When the tone is turned on the voter disqualifies that receiver, but if that same user is getting into the other receiver (at the master site) then the tone guarantees that the signal from that receiver will be used, instead.

There is a short "hang time" on the link transmitter which means that when an input signal disappears from the 2 meter receiver at Scott's, the tone will turn on instantly, making the master site ignore the input, and then the UHF link transmitter signal will drop and in this way, the extra squelch tail from the UHF link transmitter dropping is never heard by the user.
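The tone-signalling scheme above can be sketched in code. This is purely illustrative - the real system does this with analog hardware and a PIC, and the detector, sample rate and threshold here are assumptions:

```python
# Sketch of the 3.2 kHz "squelch closed" tone detection on the UHF link.
# A strong tone means the remote squelch has closed, so the voter must
# NOT use that receiver's audio.

import math

TONE_HZ = 3200.0

def goertzel_power(samples, rate, freq):
    """Single-bin Goertzel detector: energy at 'freq' in 'samples'."""
    n = len(samples)
    k = int(0.5 + n * freq / rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def link_audio_usable(samples, rate=8000, threshold=1e4):
    """True if no strong 3.2 kHz tone is present (squelch open)."""
    return goertzel_power(samples, rate, TONE_HZ) < threshold

rate = 8000
# Remote squelch open: plain speech-band audio, no 3.2 kHz marker
speech = [math.sin(2 * math.pi * 600 * t / rate) for t in range(400)]
print(link_audio_usable(speech, rate))   # True - voter may use it

# Remote squelch closed: strong 3.2 kHz tone present on the link
tone = [math.sin(2 * math.pi * 3200 * t / rate) for t in range(400)]
print(link_audio_usable(tone, rate))     # False - disqualified
```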

As it happens, the signals coming the other way (from the master site to Scott's over the UHF link to be retransmitted on 2 meters) also use this 3.2 kHz tone - this time, to control the Scott's VHF transmitter.  In this case, however, the activation of the tone starts the "Unkey" sequence at Scott's allowing time for the disciplined oscillator to put an extra "beep" on the transmitter (so that users know which transmitter they are hearing!) and then unkey the VHF transmitter.

Since the 3.2 kHz tone being sent to Scott's occurs just before the master site's 2 meter transmitter unkeys, it's possible to set the timing so that both sites unkey at precisely the same time:  If both sites didn't unkey simultaneously, many users would be annoyed by the presence of an extra squelch tail if they could, in fact, hear the "other" transmitter hanging in there for a short time!

At the master site:

As it turns out, the master site's interfacing was a bit easier... sort of...  This repeater's master site is actually split, with the receiver and antenna several hundred feet away from the transmitter - this to put it farther away from the megawatt of RF being emitted by all of the TV and radio transmitters at the main site!  It is connected via transformer-coupled cables (for lightning protection) and has operated with minimal maintenance since the early 1980's.

Since we already had on-hand local COS (squelch) and audio from the receiver, there was no need for the tone signalling schemes of the remote site but, instead, the audio and COS lines could be input to the voter.  There was a problem:  The "local" receive audio was "too" good!

The way the voter works is that it analyzes mostly the audio above 2.5 kHz:  Of the receivers being compared, it is the one with the MOST audio above 2.5 kHz that is considered to have the worst signal.  The reason for this is pretty simple:  As an FM signal gets weaker it gets noisier, so it stands to reason that given two otherwise identical signals, the noisier one will have a higher total signal level - particularly at higher audio frequencies.

The problem was that the audio from Scott's had already passed through a radio link which tended to scrape off the audio above about 3.5 kHz or so, while the "local" audio, being coupled via wire, had no such low-pass filtering - so we had to add some.  What was happening was that the "local" audio - with its additional "highs" (as compared to the audio from Scott's) - was being considered "bad".  By removing those extra high-frequency components and making the two audio signals pretty much equal we were able to make them more-or-less directly comparable.
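The "most energy above 2.5 kHz loses" rule can be illustrated with a toy voter. A crude first-difference high-pass stands in for the real analog filtering, and the names and values here are assumptions:

```python
# Toy voter: pick the receiver with the LEAST high-frequency energy
# (least noise).  A differentiator emphasizes high frequencies, so a
# noisier (weaker FM) signal scores higher and loses the vote.

import math, random

def hf_energy(samples):
    """Crude high-frequency energy: energy of sample-to-sample differences."""
    return sum((b - a) ** 2 for a, b in zip(samples, samples[1:]))

def vote(receivers):
    """receivers: dict of name -> audio samples.  Returns the name of
    the receiver with the lowest high-frequency energy (best signal)."""
    return min(receivers, key=lambda name: hf_energy(receivers[name]))

rate = 8000
random.seed(1)
clean = [math.sin(2 * math.pi * 400 * t / rate) for t in range(1000)]
noisy = [s + random.gauss(0, 0.3) for s in clean]   # weak, noisy copy

print(vote({"master": clean, "scotts_hill": noisy}))  # master
```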

Next time -Part Four:  A bit more about the voting controller and some of the remote control/monitoring capabilities.

[End]

This page stolen from ka7oei.blogspot.com

Wednesday, September 5, 2012

Voice on a laser beam...


Sending voice over light is nothing new.  The first wireless voice communications system - using light - was the Photophone, demonstrated in 1880 by Alexander Graham Bell - a full quarter-century before Fessenden demonstrated the same feat using radio waves.  To be sure, optical communications has certain practical limitations, namely the blinding presence of the sun and the occasional opacity of the atmosphere due to weather, but it's still a fascinating and fun topic of discussion.

I'm one of those people who find wireless communications of any sort to be interesting and I have a particularly keen fascination with optical wireless communications - that is, using "radio waves" that I can see with my own eyes.

For short-range experimentation it's hard to beat a cheap laser pointer - and here is a bit of info on how one might go about this.

Modulating the laser pointer:

The laser pointer is built around a laser diode and, like any diode, it has a maximum current rating that should be observed with even more caution than its voltage rating.  What this means is that you cannot connect a laser diode to just any battery and expect it to work properly:  Too little voltage and it won't lase, while with too much, it will never lase again!  What is needed is a simple circuit that limits the amount of current fed into the laser diode to a safe level - and fortunately, cheap laser pointers always have something that does this.

Increasingly, cheap laser pointers simply rely on a combination of a simple circuit (or even a single resistor!) and the internal resistance of the battery powering it to keep the laser current at a safe level and since a laser pointer already has the necessary parts, why not use them?

In my opinion, one mistake that I often see on web pages that describe the modulation of a laser pointer is to attempt to modulate by varying the voltage/current of operation - typically using a transformer in series with the power source.  There are several things wrong with this:
  • It's not certain how far down in current one can go before the laser drops out of its "laser" mode or how high one can go before it gets "blowed up."
  • Laser current versus output isn't terribly linear which means that distortion can occur.
  • With the min/max current uncertainty, one can't fully modulate the laser's output safely which means that the audio on the beam will be somewhat "quiet" - something that reduces the efficacy of the link!
The better way to modulate a laser is to simply turn it on and off using Pulse Width Modulation (PWM), taking advantage of the circuit already present to safely operate the laser from its intended power source - say, a pair of AAA cells (or 3.0 volts.)  While more complicated than simply putting the laser's power supply in series with a transformer, it's pretty much bulletproof and can sound pretty darn good!
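The PWM idea can be sketched as follows - the laser is only ever fully on or fully off, with the audio carried in the duty cycle. The carrier rate and modulation depth here are illustrative assumptions, not values from K7RJ's circuit:

```python
# Sketch of PWM laser modulation: the laser is always fully ON or fully
# OFF (so the pointer's own current limiting still applies) and the
# audio rides on the duty cycle.  All values are illustrative.

import math

PWM_RATE = 50_000     # PWM carrier in Hz - well above the audio band
AUDIO_HZ = 1_000      # test tone

def laser_on(t):
    """Return 1 (laser on) or 0 (laser off) at time t, in seconds."""
    audio = 0.4 * math.sin(2 * math.pi * AUDIO_HZ * t)  # +/-40% swing
    duty = 0.5 + audio / 2.0       # duty cycle varies 30%..70%
    phase = (t * PWM_RATE) % 1.0   # position within the current PWM period
    return 1 if phase < duty else 0

# The receiver's low-pass filter recovers the audio as the short-term
# average of the beam; over a whole audio cycle the average duty is ~50%.
samples = [laser_on(t / 1_000_000) for t in range(1000)]  # 1 ms of beam
print(sum(samples) / len(samples))
```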
 
Figure 1: Laser transmitter/receiver by K7RJ.
For a diagram of this unit, see Figure 4 at the bottom of this page.
Click on the image for a larger version.

A simple circuit to do this may be found in the diagram in Figure 4 at the bottom of the page.

I won't take credit for this circuit which was thrown together by my friend Ron, K7RJ.  When built, this circuit was intended to be quick and easy and high performance was NOT in mind - just enough effort was put into it to make it work for demonstration purposes.

Contained within the diagram is enough information to connect your cheap, 3-volt laser pointer - just be sure to pay close attention to its positive and negative battery connections when you take it apart!

Also contained in this diagram is a very simple, low-performance receiver intended solely for across-the-room (or across-the-parking lot) testing of the transmitter to make sure that it works.  It should be emphasized that this receiver is not at all intended for longer-distance use - say, more than a few hundred meters at most, and its performance can be spectacularly enhanced with the careful installation of a small magnifying glass lens with the phototransistor at its focus.  Even when enhanced thusly, other optical receiver circuits will still run circles around it!  A link to a web page describing a far more sensitive circuit may be found at the bottom of this page.

At this point I'll make a few comments about laser safety and legality:
  • Make certain that your "laser range" is end-stopped - that is, when the beam goes beyond the receiver it does not cross a road or have any likelihood of being intercepted by aircraft in flight or landing/taking off where they can dazzle and distract!  In other words, the receive end should be against the side of a building or hill.
  • While cheap, red laser pointers are probably too weak to cause permanent eye damage, it's best not to stare into it or point it directly at people!  A standard, cheap red laser pointer will, at its worst, probably just dazzle and maybe cause a brief headache or eye pain as well as a temporary loss of night vision.  The farther you are away from it, the less dangerous it will be.
  • In some states and areas laser pointers are highly regulated or even illegal - including some U.S. and Australian states/localities - check your local listings!
  • It is NOT recommended that any but cheap, red laser pointers be used for this purpose.  Why?  First, they are the cheapest and secondly, they are fairly safe and low power.  It's also worth considering that typical electronic detectors respond far better (e.g. are more sensitive) to red light than green or blue - not to mention there being less atmospheric attenuation at "red" wavelengths!  Some of these "fancier" laser pointers of other colors have electronic circuits in them that can prevent them from being modulated effectively.

Figure 2:
Cheap laser pointer on a tripod
Click on the image for a larger version.
One thing that you'll immediately notice about laser pointers is that, despite their name, they can be fiendishly difficult to aim - particularly as the distance increases!  For this reason it's best to contrive a means by which a camera tripod can be used to hold the laser pointer - but even this can be tricky, since even a fairly expensive tripod is quite "touchy"!  To the left you can see Ron's laser pointer mount, with the pointer module itself contained within a cheap project box from Radio Shack and connected by a short cable to the rest of the circuitry.

This brings up another point as well:  Do not put both the modulator electronics and your laser pointer in the same box.  By connecting them with a cable you will be able to make adjustments and turn the thing on and off without touching the tripod and possibly disturbing your carefully-aimed beam!

Another example of a laser pointer modified for such use may be seen on the right.  When I got this pointer I couldn't see how I could remove the laser module without the possibility of damaging it so I simply used it as-is:  A wooden dowel, the same diameter as AAA cells was used and at the inside end of the dowel was a small screw to which the minus (-) side connection was made.  The connection to this screw was made via a wire laid in a shallow groove along the length of the dowel and the positive (+) side was connected to the case of the pointer itself by using some copper foil wrapped around the end of the dowel opposite the screw.  The dowel was tack-glued into place, pushing against the internal battery spring and the laser's "on" button was simply taped down.  The entire pointer was then "hot-glued" to a cheap project box that itself has inside it a 1/4"-20 bolt glued into place to allow attaching to a tripod mount while electrically insulating the laser pointer's positively-connected case from the tripod.

Figure 3:
Minimally-modified pointer on a tripod mount.  This just happens to be
mounted atop an 8" astronomical telescope (a Celestron C8) with an
equatorial mount which allows precise aiming - and it also includes a
telescope!
Click on the image for a larger version.
A lot has been glossed over in this brief article - namely techniques about how to accomplish a laser communication over longer distances including links to descriptions of higher-performance gear and methods of precisely aiming - and if you are really interested, you can take a look at my page:

 "Using Laser Pointers for Voice Communications" (see the link below) for a lot more detail than can be covered here.

How far can a lowly laser pointer go?

Under clear-air conditions on a line-of-sight path, and using the very same lasers pictured above, I've had 2-way laser pointer-to-laser pointer communications over a 107 mile (173 km) path with fairly good signals.  This was, of course, using high-performance receivers with orders of magnitude better sensitivity than the one shown in Figures 1 and 4 on this page!  In the "Using Laser Pointers..." link just above one can even find additional links to actual "off the air" recordings made via long-distance laser-pointer communications systems.

There are other problems with using lasers over distance, however, namely that of scintillation - the rapid fading or "twinkling" caused by the irregularities in the atmosphere.  While this affects all types of light sources the combination of the coherent laser light and the small diameter of the beam as it exits the laser greatly exacerbates the problem - but that's a topic for another article!

Links from the "Modulated Light" (link) web site:
Figure 4:
Schematic diagrams of a simple (but deaf) receiver for testing and a simple PWM laser/LED transmitter, described
in the text above.  This unit was designed by Ron, K7RJ and is shown in Figure 1, above.
Click on the image for a larger version.
[End]


Friday, August 31, 2012

Problems with Lithium Iron Phosphate (LiFePO4) Batteries

Update:
For an update about what turned out to be happening with these batteries - and one possible solution - see the May 18, 2013 post, "Lithium Iron Phosphate batteries revisited - Equalization of cells" - link


In 2010 - about 2-1/2 years ago - over the period of several months, I got three 13 volt, 6+ amp-hour Lithium Iron Phosphate (LiFePO4) packs from Batteryspace.com for about $95 each.  These packs seemed to be a reasonable alternative to my old standby of portable battery power - the ubiquitous 12 volt, 7 amp-hour sealed lead-acid (SLA) battery, often (mistakenly) called a "Gel Cell."

Why switch from SLAs?
The three LiFePO4 battery packs in question.

LiFePO4 packs seemed to be attractive for the following reasons:
  • LiFePO4's were lighter than the same-capacity SLAs - roughly 1/2-2/3 as much weight.
  • Claimed durability of 1000-2000 charge cycles for LiFePO4's versus 100-200 or so for SLAs.
  • Claimed 10 year lifetime for the LiFePO4's versus 3-5 years for SLAs and conventional Lithium-Ion packs (even the polymer types.)
  • With all of the above, the relatively high initial cost ($95) of a LiFePO4 battery that would last 10 years seemed, on a "per-year" basis, reasonably comparable to the $15-$30 (when new) of typical 7 amp-hour lead-acid packs - and the lighter weight was a plus!
As it turns out the LiFePO4 packs aren't quite as "energy dense" as  "normal" Lithium Ion cells - that is, when cylindrical LiFePO4 cells are assembled in a battery pack it takes about the same amount of space as a lead acid battery of the same capacity - but they weigh much less.  It's also worth remembering that conventional LiIon packs will typically last 3-5 years from the date of manufacture and thus didn't have much longevity advantage in that respect over SLAs.

So, over the period of several months, I ordered three of these 6.2 amp-hour LiFePO4 packs that put out about 13-ish volts over their discharge cycle - slightly higher than SLAs, but still well within the realm of what typical "12 volt" gear will accommodate.

As I typically do with newly-acquired batteries, I checked the amp-hour capacity of each of the three battery packs shortly after arrival using my West Mountain Radio Computerized Battery Analyzer at 700 milliamps and found that they were reasonably close to the advertised capacity - that is, around 5.8 amp-hours:  Typically such batteries are rated at the "20 hour" rate, which would have been about 310 milliamps, and the higher rate that I used would reduce the measurement by 10-20%, so I was pleased with the results.
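The "20 hour rate" arithmetic above works out as follows (a trivial check, using the rated 6.2 amp-hour capacity):

```python
# The "20 hour" discharge rate for a 6.2 amp-hour pack:
capacity_ah = 6.2
rate_20h_ma = capacity_ah / 20 * 1000   # amps -> milliamps
print(round(rate_20h_ma))               # 310 mA, as stated in the text
```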

At about the same time I acquired some 2 year-old 12 volt, 7 amp-hour lead-acid batteries that had been pulled from UPS service on a routine basis and these were found to have about 6.2-6.5 amp-hour capacity at the same 700 milliamp rate.

In the intervening years I used these batteries (both LiFePO4 and SLA) about equally, running radio equipment and the like and earlier this year I suddenly realized that something was amiss:  The LiFePO4 packs were dropping out far earlier than they should have.

A bit of explanation here:

All rechargeable lithium-ion packs (should!) have built-in circuitry to protect against excess over-discharge, the reason being that if you run a lithium battery down too far an irreversible chemical change occurs and they cannot be safely recharged ever again.  For this reason when a lithium pack runs down too far it will suddenly drop off, the internal circuit disconnecting the battery to protect it.

Lead Acid packs, on the other hand, do not do this:  Their voltage slowly drops down and their effective internal resistance goes up and one eventually realizes that the equipment being powered is no longer working correctly.  (Note:  This ignores longer-term permanent damage from sulfation that will occur if a lead-acid cell remains discharged for a long time.)

As it turns out both Lithium-Ion and Lead-Acid packs are charged in similar ways.  One simply connects a power supply of voltage appropriate for the type of battery pack and let it charge.  Both types of batteries, when discharged, will pull more charge current but this will gradually drop off as the battery approaches full charge and for this reason it's typical for these power supplies to be current-limited as well as be fixed voltage.

A major difference between how one treats Lithium-Ion (including LiFePO4) and Lead-Acid (SLA) batteries appears at the point of full charge:
  • For SLAs one obtains the best lifetime by continuously maintaining them at a constant voltage - typically 13.5-13.8 volts for a "12 volt" lead acid battery
  • Lithium types should not be maintained at the "full charge" voltage after full charge has been achieved.

What happens with Lithium-Ion batteries (including LiFePO4) is that if you maintain the "full charge" voltage its internal chemistry degrades much more rapidly than if you were to fully-charge the battery and then immediately disconnect the source, allowing the voltage to sink down a bit on its own.

What this means is that you will get much better longevity out of a Lithium pack if you do not keep a high-level float charge on it.  In fact, the best longevity of Lithium-type rechargeable batteries can be obtained if you store them in a half-discharged state - provided that you check once in a while to verify that their self-discharge hasn't caused their voltage to go so low that they become damaged from that!
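The charge-termination difference described above might be sketched like this. The taper model, currents and thresholds are invented for illustration only:

```python
# Sketch of the charging difference: both chemistries charge at a
# current-limited constant voltage, but at full charge lithium should
# be disconnected while lead-acid may be left on float.

def charge(chemistry, hours):
    """Return the charger state for each hour.  Charge current tapers
    toward zero as the battery approaches full charge (crude model)."""
    states = []
    for h in range(hours):
        current = 1.5 * (0.5 ** h)        # amps - illustrative taper
        if current < 0.05:                # taper-current "full" cutoff
            if chemistry == "lifepo4":
                states.append("disconnected")   # no float for lithium!
            else:
                states.append("float 13.6 V")   # SLAs like a float charge
        else:
            states.append(f"charging {current:.2f} A")
    return states

print(charge("lifepo4", 8)[-1])   # disconnected
print(charge("sla", 8)[-1])       # float 13.6 V
```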

* * *

That is how I treated the LiFePO4 batteries:  I would attach the pack to a regulated 14.2 volt, 1.5 amp power supply for 12-18 hours, then disconnect it and place it on the shelf, possibly topping it off briefly just before using it.  The Lead-Acid batteries, on the other hand, are left connected to a 13.6 volt power supply and allowed to sit there all of the time when not being used.

I was, therefore, chagrined when after just two years the now 4 year-old SLAs were outlasting my LiFePO4 packs.

This observation spawned some further testing, so I put the LiFePO4 packs back on my battery tester and was further distressed to note that those that had originally tested as having 5.8-6 amp-hour capacity were now, at the very most, in the 1.5-2 amp-hour range while the much older SLAs were still in the 5.0+ amp-hour range.

Comment:
 In the time since I did the testing for this entry, the LiFePO4 packs have continued to degrade at about the same, alarming rate while the old Lead-Acid cells are still hanging in there, degrading much more slowly.

Hmmm...

So, what's the deal?  Why are the 4+ year old SLAs still in better shape than the 2 year old LiFePO4 packs?

I really don't know.  I've attempted to correspond with the sellers of the LiFePO4 batteries (batteryspace.com) to find out their "take" on this observation, but I've not heard back from them - too bad since I've had reasonable luck with their customer service in the past...

Perhaps they got a batch of "bad" cells - but since the three LiFePO4 packs were actually purchased several months apart it would seem to me that it's more a problem with manufacture/chemistry of the cells themselves. 

What to do?

At the moment I'm sticking with the old, heavy SLAs since I'm now understandably "gun shy" when it comes to LiFePO4s since the former do seem to be fairly predictable in their longevity and performance - at least when treated properly!


Update on battery longevity (June, 2016):

I recently re-tested the three batteries depicted above and found that their capacity ranged between 4.8 and 5.4 amp-hours - this for batteries that were at least six years old.  Based on their capacity when they were new, they have lost somewhere around 20% of their original capacity in that time.

While I'm a bit skeptical that they will make it to the 10 or 20 year mark, it is worth noting that practically any lead-acid battery of this same age would have since been relegated to the recycler!

[End]


Tuesday, August 14, 2012

A more practical capacitor-powered flashlight

In an earlier post, "A mechanically-powered capacitor flashlight" I wrote about those cheap LED-based "shake-powered" flashlights that were seen on many an annoying commercial several years ago.

You might recall that their promise was that they would never need batteries and one simply shook them back-and-forth to generate all the power that was needed.  In that same post I also noted that many of these same flashlights actually did contain batteries and that while they still worked if those batteries were removed, it took several minutes of shaking to get any usable light and that it was quite an effort to maintain a useful light output!

At the end of this article, I mentioned a few things that might make such a light more practical and useful, including:
  • A better capacitor.  The cheap flashlight had a rather small (0.22 Farad) capacitor for energy storage - not very much energy, really, approximately 6.6 Joules maximum or less than 1/1000th of what a single AA alkaline cell contains!  Being a standard "super cap" its internal resistance was quite high (10's of ohms), which meant that a large percentage of the energy - both that dumped into it during charging and that extracted from it to run the LED - was lost as heat - not much heat, but heat just the same.
  • A switching converter to run the LED.  The LED didn't even begin to light until 2.7-3.0 volts or so appeared across the capacitor and it isn't usefully bright until there is 3.6-4.2 volts available which meant that a significant portion of the energy in the capacitor (all of that at voltages of 3-ish volts and below) was unusable.  A simple switching converter would allow both extraction of that additional energy as well as regulate the LED's current so that its brightness was more consistent over the entire charge range and, in theory, could also be adjusted upwards or downwards as necessary.  The efficacy of trying this with a capacitor of high internal resistance would probably be dubious...
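The energy figures in the bullets above follow from E = 1/2*C*V^2. A quick check - note that the full-charge voltage here is inferred from the article's "~6.6 Joule" figure, not stated directly:

```python
# E = 1/2 * C * V^2 for the shake-light's 0.22 Farad storage capacitor.

def cap_energy(c_farads, volts):
    return 0.5 * c_farads * volts ** 2

C = 0.22
v_full = (2 * 6.6 / C) ** 0.5         # voltage implied by "~6.6 Joules"
print(round(v_full, 1))               # ~7.7 volts at full charge

# Energy stranded below the ~3 volts where the LED barely starts to
# light - the portion a switching converter could recover:
stranded = cap_energy(C, 3.0)
print(round(stranded, 2))             # 0.99 Joules
print(round(stranded / 6.6 * 100))    # ~15% of the total
```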
One of the conclusions in this earlier article was that the back-and-forth shaking motion wasn't a very efficient means of generating electricity - both in terms of expended muscle energy (since you have to move and stop the entire mass of your arm!) and compared to a conventional crank-type generator - and it would necessarily be larger and heavier in order to be more efficient.  By using a conventional spinning generator and gearing up rotational speed, one can more-efficiently rotate a smaller magnet faster amongst a larger number of poles with a motion that requires less human effort.  What's more, a crank-type generator is quite "scalable" in its input:  You could crank it fairly gently for a long time or do so vigorously for a shorter time and get roughly comparable results in terms of total energy output - within reason, of course.
Figure 1:
The prototype capacitor-based flashlight using a Maxwell Energy
2600 Farad, 2.5 volt "Boostcap".
Click on the image for a larger version.

What is more likely in most situations is that one already has a source of power somewhere (an already-charged battery, solar panels, a plug-in power supply, etc.) that can be used to charge the flashlight, making it unnecessary to bring the means of charging along with you.

Such devices are already available in the form of batteries, particularly rechargeables, so having a capacitor-powered rechargeable flashlight is more of an intellectual exercise than one of practicality - but being practical has not always been much of a deterrent to the experimenting nerd!

Some time ago The Electronic Goldmine in Arizona had a large quantity of Maxwell BCAP0010 BoostCaps tm* available.  These were obtained for just $6 each and had a rated capacity of 2600 Farads (yes, that's 2.6 kF or kiloFarads!) at 2.5 volts with a "surge voltage" of 2.8 volts - whatever that means...

Comment:  I noted that at other times they had models that were rated at around 3 kiloFarads at 2.7 volts, but these were sold for far more than $6 each.  Alas, as is the nature of surplus, the supply was limited and they sold out fairly quickly.  Sometimes these types of capacitors will show up elsewhere on the surplus market so if you want some, it would pay to look around!

Compared with the 0.22F capacitor in the original flashlight, these units have over 10,000 times the capacitance (albeit at a lower voltage) and very low internal resistance - in the milliohm area - as their intended use was to provide a large burst of current for a short time, say, on an electric vehicle.
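As a quick sanity check on those ratios, here's the arithmetic as a short Python sketch, using the figures quoted in this article (the 6.6 Joule number for the cheap capacitor comes from the list at the top):

```python
c_small, c_big = 0.22, 2600.0            # farads
print(f"capacitance ratio: {c_big / c_small:,.0f}x")

e_small = 6.6                            # joules, quoted for the cheap cap
e_big = 0.5 * c_big * 2.5 ** 2           # E = C*V^2/2 at the 2.5 volt rating
print(f"energy ratio: {e_big / e_small:,.0f}x")
```

The energy advantage (roughly 1,200x) is much smaller than the raw capacitance ratio because energy goes as the square of the voltage, and the Boostcap's rated voltage is far lower.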

To demonstrate, I charged one of these capacitors to 2.5 volts and then carefully shorted out the terminals with a length of #14 AWG bare copper wire, holding it in pliers.  Within a second or so the current from the capacitor had burned this wire open and, in so doing, it lost only about 0.1-0.15 volts!  For these particular capacitors the maximum rated current is on the order of 600 amps so I have no doubt that I could have repeated the same trick (not recommended!) with larger gauge wire!

What this means is that the resistive losses of this type of capacitor (e.g. a "Boostcap") are negligible both when it is being charged from a power source and when it is being discharged into an LED circuit.  As an example, let's assume that we need to draw 100 milliamps to run our hypothetical LED circuit from two different types of capacitors:
  • A standard "supercap" with an internal resistance of 10 ohms - a nice, round value, typical of these types of capacitor.
  • A "boostcap" power system with an internal resistance of 100 milliohms - that value being mostly that of thin wires connecting to the capacitor:  The capacitor itself would likely have an internal resistance a fraction of this!
If we take the formula:  P = I^2 R (that is, power equals the square of the current multiplied by the resistance) with the resistance values above and assuming an LED current of 100 milliamps - and ignoring other losses we get:
  • A loss of 100 milliwatts from a standard super cap.
  • A loss of 1 milliwatt from the "boostcap" and its connecting wires.
Now, if that LED were running from, say, 2 volts at 100 milliamps, the total LED power in each case would be 200 milliwatts - but you can see that the supercap would be losing an additional 100 milliwatts in heat while the boostcap would be losing just 1 milliwatt - a considerable difference!  (For simplicity, these figures treat the resistive loss separately from the power actually delivered to the LED.)
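The I^2*R arithmetic above is trivial, but a quick sketch (using the same round numbers as the text) makes the disparity concrete:

```python
def i2r_loss_w(current_a, esr_ohms):
    """Power dissipated in the capacitor's internal resistance: P = I^2 * R."""
    return current_a ** 2 * esr_ohms

led_current = 0.100                       # 100 mA, as assumed above
for name, esr in (("supercap, 10 ohm ESR", 10.0),
                  ("boostcap + wiring, 0.1 ohm", 0.1)):
    print(f"{name}: {i2r_loss_w(led_current, esr) * 1000:.0f} mW lost")
```

Because the loss goes as the *square* of the current, the disparity would grow even faster if the LED were run harder.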

Clearly, the use of a boostcap offers superior efficiency when discharging, but it also works in reverse:  One could dump many amps into the capacitor (if you used thicker connecting wire) and charge it very quickly and efficiently.

We still have the problem of running the LED, however.  The boostcap capacitors that I obtained were designed to be charged to just 2.5 volts or so and this is too low to run a standard white LED, which needs 3.6-4.2 volts just to light up brightly, so an electronic boost circuit is required and this was accomplished using a variation of the ubiquitous "Joule Thief" circuit:
Figure 2:
Schematic of the flashlight.  This diagram includes a "blocking oscillator" (a.k.a. "Joule Thief") and a current sensing circuit.
See the text for recommendations on transistors to use for Q1.
Click on the image for a larger version.

Important Note:
  • This discussion assumes that one is using an LED with a 3.6 volt threshold, as is typical for most white and blue LEDs.  LEDs with lower voltages (e.g. typical red or yellow devices that operate in the 1.6-2.5 volt region) can't be used with this circuit because their operating voltage would be below the full-charge voltage of the capacitor and they would be immediately destroyed by current flowing from the capacitor through T1.

While there are more efficient circuits out there, there are almost none that are simpler than the Joule Thief and adaptable to parts that might be found in scrounging around the junk box.

What I came up with is the circuit in the diagram.  At its heart (Q1, T1, R1, LED1) it is the Joule Thief circuit comprising a "Blocking Oscillator (link)" that, using inductive "kick" from T1, will produce a voltage higher than that of the power supply (our capacitor) - sufficient to light the LED.

While the simplest version of the circuit using the aforementioned components did work, it was very bright at higher capacitor voltages (above, say, 1.8 volts) but got noticeably dimmer - though still useful - at lower voltages.  Since the intent was to provide a "useful" amount of light I decided that I didn't need "maximum brightness" at the higher voltages and that I'd be happy with a much dimmer - but consistent - brightness over a much wider range of capacitor voltages.  This also had the obvious and beneficial side-effect of allowing a much longer run-time since, overall, the power consumption was reduced to a fairly steady level over the entire voltage range.

To regulate the LED current a simple circuit was added consisting of T2, D1, R2, R3, C2 and Q2.  The way this circuit works is that the AC current through the LED passes through the primary of T2 and is then rectified and integrated by D1, R3 and C2; if the resulting voltage is too high (correlating with a higher average LED current) Q2 conducts, "pinching off" the drive to Q1.

Originally, a circuit consisting simply of a series resistor along with a transistor like Q2 was tried, in which the current through the resistor - if it developed the 0.6 volts required to turn on the transistor - would be used to turn off the oscillator and regulate it, but this added resistor caused a bit of the LED's power to be lost as heat - plus, it just didn't work very well!

Using a simple transformer arrangement to "transduce" the current into a voltage reduced the efficiency losses that occurred with a series resistor while still being fairly simple.  Being simple also meant that there was still a fair amount (say, 25% or so) of LED brightness variation across the target 1.1-2.5 volt range, but that was considered to be acceptable for a simple circuit.  This circuit is also somewhat affected by temperature owing to the fact that not only do the various current gains of the transistors change, but so do the threshold voltages of the transistors and D1.

In this circuit there's really only one critical component and that's Q1, an NPN transistor specifically designed for use in photoflash inverters; as such it can switch several amps of current with a low collector-emitter drop, this rating being several times that of the more ubiquitous 2N3904 or equivalent.  While a standard NPN like the '3904 will work, it will not work as well and will be much less efficient.  The KSD5041 may be bought from Mouser Electronics; a 2SC695 or an NTE11 may be substituted, or a suitable device may even be found on the flash board of a discarded disposable camera.

An even better alternative for Q1 was suggested by Brooke Clarke (a link to one of his web pages analyzing the Joule Thief may be found here) and that is the Zetex ZTX1048A, available through Mouser and Digi-Key for approximately $1 each in small quantities.  This device - like the KSD5041 and 2SC695 - offers increased efficiency by virtue of its very low collector-emitter saturation voltage, an important consideration when one has the conflicting needs of both high current and low voltage in a circuit such as this.  According to the specification sheets, the '1048 offers the possibility of even lower saturation voltage than the '5041!

Figure 3:
The capacitor flashlight's circuitry.
Click on the image for a larger version.
The two inductors were toroids salvaged from a defunct computer power supply - and even some of the original wire was salvaged!  In this particular power supply - and several others that I have seen - it's common to see several different-sized toroidal inductors and I happened to choose the larger one for T1.

The circuit itself was built "dead bug" - that is, components were hanging in free space, soldered to each other's leads with the entire assembly being "potted" in thermoset ("hot-melt") glue to stabilize the components to prevent shorting and lead breakage.  As can be seen from the pictures a small piece of PVC pipe was used to not only contain the circuit, but also to shield the positive terminal of the capacitor so that it was not possible to accidentally short it out - something that could conceivably start a fire!

The LED itself is a 3-watt Luxeon III Star that I had kicking around, but it's not being run at anywhere near its maximum ratings so just about any 1-3 watt white LED that you might find would suffice.  While it's not running at a watt of power, the converter probably produces too much output for a single, epoxy-cased white LED, but 3-6 identical units in parallel would probably be fine - with the added benefit that they could be aimed so that their built-in lenses could be used to advantage to shape the resulting beam of light.

Originally, I considered putting a lens on the single LED to concentrate the light but I soon realized that without using a special lens designed specifically for this LED I'd end up with less light overall due to optical inefficiencies.  Even with the LED being "bare" its light output is more than enough to be useful, even walking along a mountain trail in the dark, and its beam is broadly cast so that one isn't as subject to the "spotlight effect" of some LED flashlights where you can see only that which is directly in the beam while the surroundings disappear!

To charge the flashlight I set a variable-voltage bench supply at exactly 2.60 volts and then applied it to the connector (not visible in the pictures) which is wired directly across the capacitor.  From a state of complete discharge (0 volts) it will take several hours for a 1 amp bench supply to fully-charge the capacitor!  Whatever you do, do not allow the capacitor's voltage to exceed its maximum ratings or else it may be damaged:  I have no idea what actually happens if you do this, but I wouldn't recommend trying!
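As a sanity check on the charge time:  Q = C x V gives the total charge a current-limited supply must deliver, and dividing by the current gives the time.  A sketch of the ideal constant-current case follows; the real charge takes longer still, since the supply drops out of current limit and the current tapers off as the capacitor voltage approaches the 2.60 volt setpoint:

```python
capacitance = 2600.0     # farads
v_target = 2.60          # volts
i_charge = 1.0           # amps - the supply's assumed constant current limit

t_seconds = capacitance * v_target / i_charge   # t = C * V / I
print(f"ideal charge time: {t_seconds:.0f} s (~{t_seconds / 3600:.1f} hours)")
```

Even in the ideal case, that's the better part of two hours at a steady 1 amp - and in practice, rather longer.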

It's worth mentioning at this point that my charging method is extremely inefficient since, when using a linear supply, most of the power input would be lost as heat!  A far more efficient (and somewhat more complex) method would be to use a switching converter to provide the capacitor charge current and have its maximum voltage set to 2.60 volts and this would be much preferred in a power-limited situation where one had only battery or solar as the energy source.
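To put a rough number on that inefficiency:  A linear supply delivers every coulomb from its internal rail voltage, while the capacitor only stores energy at its own (much lower, and initially zero) terminal voltage.  The sketch below assumes a hypothetical 12 volt internal rail - the actual supply's rail voltage isn't stated here, so the exact percentage will vary:

```python
c, v_final = 2600.0, 2.60
v_rail = 12.0                        # assumed linear-supply rail voltage

q = c * v_final                      # total charge delivered, coulombs
e_stored = 0.5 * c * v_final ** 2    # energy that ends up in the capacitor
e_drawn = v_rail * q                 # energy pulled from the rail

print(f"stored {e_stored / 1000:.1f} kJ of {e_drawn / 1000:.1f} kJ drawn "
      f"({100 * e_stored / e_drawn:.0f}% efficient)")
```

With these assumptions, roughly 90% of the input energy ends up as heat in the pass transistor - which is exactly why a switching charger would be preferred when running from a battery or solar panel.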

* * *

Update - As of the time of this posting (August, 2013) I've used this flashlight for more than a year, now (since August, 2011) - both around the house and at night while hiking in the mountains and in that time I have only charged it once - and it's still going strong.

Additional Update - As of this update (January, 2017) there is still enough remnant of the original charge on the capacitor to power the light to reasonable brightness.  For most of the intervening years this device has been sitting on a shelf, having been used for a while during an extended power failure - to find another flashlight!

* * *

While it may sound like this capacitor can store a reasonable amount of energy (and it can!) it's worth noting that the total amount of energy stored in one of these capacitors when it is fully-charged (approximately 8200 joules) is in the same ballpark as the amount of energy contained in a single fresh AA alkaline cell!  Anyone who has actually used a reasonably efficient AA-cell powered LED flashlight knows that it's perfectly capable of providing 10's of hours of useful light, so the duration of the single charge thus far shouldn't be too surprising.  Just for fun, I dug up some typical numbers:
  • For an AA Alkaline cell, given an average of about 1.25 volts and a usable capacity of 2.2 amp-hours at that voltage, this correlates with an energy storage capacity of 9000-9500 joules, depending on load, temperature, end-of-charge voltage, etc.
  • These calculations ignore the fact that some of the energy stored in the capacitor or battery is not usable, as the LED's converter circuit cannot operate - and extract energy - below approximately 0.9 volts.  This is arguably a greater factor with the capacitor because by the time an alkaline battery drops below 0.9 volts it has almost no residual energy (only a few percent, at most) while 10-12% of the original energy remains in the capacitor.

These numbers are a bit misleading since not all of that energy is usable with equal efficiency in each case over the entire voltage/charge range, but it gives a general idea as to the magnitude.
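Even so, the rough magnitudes are easy to verify.  A sketch of the arithmetic follows - note that the simple volts-times-amp-hours product for the AA cell comes out a bit above the 9000-9500 joule range quoted above, which presumably already includes some load and temperature derating:

```python
c, v_full, v_floor = 2600.0, 2.5, 0.9

e_full = 0.5 * c * v_full ** 2          # joules in the fully-charged cap
e_stranded = 0.5 * c * v_floor ** 2     # energy below the converter's 0.9 V floor
aa_joules = 1.25 * 2.2 * 3600           # avg volts * amp-hours * seconds/hour

print(f"capacitor: {e_full:.0f} J, stranded below 0.9 V: "
      f"{100 * e_stranded / e_full:.0f}%")
print(f"AA alkaline (idealized): {aa_joules:.0f} J")
```

The square-law stranded-energy figure works out to about 13% - in line with the 10-12% estimate above once converter behavior near the cutoff is accounted for.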

So, does this flashlight actually work?  Yes, it does!

Is this flashlight really practical?  No, not really.

As it turns out the capacitor itself is not only fairly heavy - about 525g (1 pound) - but it is also quite large - 60mm (2-3/8") diameter and 172mm (6-3/4") long - not including the circuitry or bolts:  I have fairly large hands and I find myself moving the flashlight from one to the other as I hike along owing to a bit of muscle fatigue from its diameter and weight.  Again, the capacitor itself was $6 from a surplus seller, but that was just a fraction of its original cost (perhaps $150-$200) and one could buy an awful lot of AA cells for its original price!

The main advantage of the capacitor is that unlike a battery, it really doesn't have a fixed number of cycles that it will last before wearing out.  Another advantage is that by knowing its voltage, one can precisely gauge how much useful power remains - a tricky proposition with batteries, especially considering that over time, they lose capacity as they age to a degree that isn't easily determined ahead of time.

I suppose that as time goes on capacitor technology will improve and eventually the power/size/weight will approach (and even surpass!) that of conventional battery technology, but until then a flashlight such as this is a bit of a nerdy novelty!

* "BoostCap" is a trademark of Maxwell Technologies 

[End]

This page stolen from ka7oei.blogspot.com
 

Thursday, August 2, 2012

Two repeaters, one frequency (part 2)


In Part One I'd described why it might be advantageous to place multiple repeaters of a linked system on the same frequency.  In short:
  • A single frequency conserves spectrum.
  • Being on the same frequency over the system's coverage area is more convenient to the user as it eliminates the need to figure out which frequency might work best for a given area.
  • The whole system is greater than the sum of its parts because of the probability that brief periods of poor coverage may be augmented by another site.
In the first part only the implementation of the receive portion of the system was discussed, in which multiple receivers were used in a voting scheme - that is, a system in which the signals from the various receivers in the system were analyzed and the best one at that instant was sent to all transmitters.

How, then, does one implement multiple transmitters on the same frequency without their clobbering each other?

This comes down, again, to one of the peculiar aspects of Frequency Modulation (FM) mentioned in the first part of this series:  The Capture Effect.  Briefly, this is the tendency for the stronger of two FM signals to override the weaker - and if they are of sufficiently different signal strength, there may not even be evidence of the weaker signal.

As it turns out, for a number of reasons this effect is more obvious on the wideband FM used in broadcasting, and you may even have observed a different FM station suddenly "pop in" in an area where there was overlap.  On the narrowband FM used on amateur radio this effect is somewhat less dramatic and "doubling" (two stations inadvertently transmitting at once) is typically detectable by a rather obvious squeal and distorted speech behind the stronger station or, in cases where the signals are almost exactly equal in strength, neither party wins as the two obliterate each other in an unintelligible mess of noise.

Figure 1:
Inside the frequency control/crossband repeater unit at Scott's.  There is
an identical unit at the other site at Farnsworth Peak.
The 10 MHz oven-controlled oscillator is in the upper-left corner
while the standard GE "EC" channel element is in the upper-
right corner.  This unit - like its twin - is hand-wired on glass-
epoxy prototyping board.
Click on the image for a larger version.

What is worth noting in the above example is that the two transmitters involved are:
  • On different frequencies.  It's likely that the two transmitters that operated at the same time were on slightly different frequencies - even several hundred Hertz apart.  This frequency difference resulted in a heterodyne (squeal) that decreased intelligibility.
  • Definitely not carrying the same audio.
As it turns out, if there are two transmitters that are both held to very tight frequency standards (within a few 10's of Hz at most) and they carry exactly the same audio, they tend not to clobber each other to nearly the same degree if they are of similar signal strength.  What's more, these "similar" transmissions seem to bother each other less as the difference in their respective signal strengths becomes greater.

Again, the system is laid out thusly:
  •  Farnsworth Peak is the "hub" and the audio for all transmitters in the system originates from there.  The audio to the auxiliary sites (such as Scott's) is conveyed via a UHF link and retransmitted on VHF.
  • All audio from all receivers ends up at Farnsworth and the "best" audio is what is transmitted to all sites.
  • The auxiliary sites (such as Scott's) are essentially crossband repeaters:  2 meter audio is received and relayed to Farnsworth on UHF, where it is voted upon; the voted audio is then sent from Farnsworth on UHF and repeated on VHF at the auxiliary sites.
What this means is that at Scott's, there's a box called the "Disciplined Oscillator" that contains a precision, oven-controlled 10 MHz oscillator that is capable of holding the VHF transmit frequency to within 1-2 Hz of where it is intended to be.  As it so-happens, this same box also contains the intelligence to function as a controller for a pair of crossband repeaters that goes from VHF to UHF for signals that are received and then again from UHF to VHF as the master audio from Farnsworth is transmitted.  This box also provides a few other basic functions such as timeout timer (in the event a link gets "stuck") as well as providing a Morse ID on the UHF link from Scott's to Farnsworth - just to keep things legal.  This same box also has an RS-485 serial interface to allow it to be connected on a bus with other devices so that it may be remotely controlled, configured and polled as needed.

When we originally designed the system we anticipated that we might need to adjust a few parameters in order to successfully have two transmitters operating on the same frequency without their causing objectionable mutual interference.  The first - and most obvious - of these was frequency control.

Because we use independent oven-controlled crystal oscillators, we couldn't nail the frequencies of the transmitters down to precisely match each other as would have been possible had we used a GPS or Rubidium-based reference, but we could count on their being within 1-2 Hz of where we had parked them.  Once the system was put on the air we solicited the help of someone who happened to live in an area where the strengths of the two transmitters were precisely equal, tweaked the frequency offsets, and made a subjective analysis as to what was "least annoying."

As it turned out, there were two ranges that seemed to be reasonable in terms of frequency offset:
  • 3-6 Hz offset.  This caused a bit of a "whooshing" sound if the two signal strengths were fairly close and fairly weak.  If the signals were exactly the same strength then the periodic nulls could cause the audio to drop out briefly and become unintelligible, but even a slight repositioning of the receive antenna could mitigate this.
  • 40-60 Hz offset.  This caused a buzzing somewhat akin to the sound of a subaudible tone as heard on a signal with severe multipath distortion.
Ultimately, we settled on the 3-6 Hz offset as it was deemed the most "user friendly" overall - especially considering that one was far more likely to be traveling mobile through the overlap areas than to be stationary in them, that the Doppler shift of a moving vehicle might exceed a 3-6 Hz offset anyway, and that the "dwell" time in a precise null where the signals of multiple transmitters canceled each other out was going to be extremely short.
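The Doppler figure is easy to check:  shift = f * v / c.  A sketch for a 2-meter carrier (146 MHz assumed here as a representative mid-band frequency) at typical vehicle speeds:

```python
freq_hz = 146.0e6            # assumed 2-meter carrier frequency
c_mps = 2.998e8              # speed of light, m/s

for mph in (30, 65):
    v_mps = mph * 0.44704    # mph -> m/s
    # nonrelativistic Doppler shift for motion directly toward/away
    shift = freq_hz * v_mps / c_mps
    print(f"{mph} mph: {shift:.1f} Hz of Doppler shift")
```

Even at a modest 30 mph the shift is already about 6.5 Hz - at the top of the chosen 3-6 Hz offset range - and at highway speed it's over twice that, which supports the choice above.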

Another factor often considered in multiple-transmitter systems is that of audio delay - compensating for the differing times-of-arrival due to the different distances from the transmitters, plus the additional delay in the audio links used to tie the disparate sites together.  Before we went through the hassle of adding an audio delay somewhere, we first wanted to see if it was really going to be a problem in the overlap areas, anyway.

It wasn't.

The only thing that we did do was observe the audio phase at and below 1 kHz and then, using the ability to select either a 0 or 180 degree audio source, match them as closely as we could.

So, what does it sound like in the overlap areas?

First of all, the coverage of the sites and their geography meant that about the only significant overlap areas were in canyons to the east of the Salt Lake area where signals from either transmitter would already be subject to multipath, anyway.  As it turns out, in traversing these areas it's rather difficult to tell where the coverage of one transmitter begins and the other ends - and it often goes both ways.  In those areas that do have severe overlap the contention between the two transmitters sounds little different from typical mobile flutter - perhaps slightly "faster" than typical 2-meter flutter, but not as fast as what might be heard on a 70cm repeater in an area with severe multipath!

In Part 3, a bit of "nerdy" technical information about how the various parts work...

[End]

 

Monday, July 23, 2012

QRP CW on the trail in the Wind River mountains of Wyoming

Over this past week I got the chance to do something that I haven't been able to do much recently:  Run QRP HF from the trail while backpacking.  This trip was a 5-day backpacking trek in the Wind River mountain range of Wyoming (near-ish Pinedale) with the vast majority of it being above 10,000 feet (3,000 meters) elevation.

Unfortunately, I didn't get a chance to operate as much as I'd wished:  There's something about being tired after carrying a 50+ lb (23kg) full-frame backpack - full of everything you need to survive - for 8 miles (over 12km) each day at such a high altitude that sometimes makes you want to just lie down for the evening rather than run around finding trees and rocks suitable for stringing an antenna!

As it turned out, only 2 QSOs were made on the trip - and at least some of this "minimalist" result was due to the fact that the HF bands seemed to be badly disturbed by high solar activity:  Only 40 meters seemed to be even remotely usable during the times I could operate (approximately 6-8pm local time).
Figure 1: 
The ATS-3A (lower-left) and its band modules (top).  In the lower-
right is a modified Hendricks Altoids (tm) longwire tuner.
Click on the image for a larger version.

The station was fairly simple:  A Sprint ATS-3A (see an EHam review of this series of rigs here) running into a modified Hendricks Altoids Longwire (end-fed) tuner and using about 60 feet (18 meters) of end-fed wire with a similarly-long counterpoise running in the opposite direction.  The power source was a block of 8-10 AA cells (a mix of NiMH and a pair of "dead" alkaline cells from the GPS receiver) yielding 9-13 volts DC and allowing the transmitter to produce between 2 and 4 watts.

As it turns out I could hear something on about every band that I tried (40, 30 and 20) but on all but 40 meters signals were extremely weak - even though I could hear an obvious difference in the background (atmospheric) noise on any band when I connected/disconnected my antenna!  On the two occasions that I was able to get on the air, 40 meters sounded rather "odd" - very rapid fluttering and a "hollow" sound often associated with auroral disturbances.


Figure 2: 
The carrying bag with the entire multi-band HF station,
sans power source and antenna wire.  A "nerd knife"
is shown for size comparison.
Click on the image for a larger version.
In tuning around on 40 meters I heard more than one instance of a received station's CW note being "split" into two separate notes a few Hz or 10's of Hz apart.  In many instances the CW signals were so badly distorted that I wasn't able to make much sense of them - except during the brief periods when the distortion relented and the signal momentarily "cleaned up".  In those instances, a sending speed of, say, 7-10 WPM would have been required instead of the 15-18 WPM that I was hearing!

Perhaps the weakest link of the QRP station was the antenna.  While I had a total of 120 feet of wire to spread out, it was rather difficult to find somewhere to sling that much wire, get it up to a reasonable height and still place it somewhere convenient to the operating station!  Since the entire station was to be as lightweight as possible (I was carrying it!) I had to forgo things like coax, so the antenna and counterpoise wire pretty much had to end where the tuner and CW transceiver were located.  And since we were swarmed by bloodthirsty mosquitoes and deer flies it was most advantageous to operate from within the tent - yet another restriction on where our antenna wire could run!

For the most part the antenna consisted of the 60-ish feet of wire suspended only 6-16 feet (2-5 meters) above the ground since there never seemed to be conveniently-located cliffs nearby nor very tall trees just above our heads where we camped - which was pretty much at or above the tree line!  The best we could do was to either sling the wire across a slight gully or run it up the hillside, paralleling the ground, and tie the far end to the rather short trees found at those altitudes.

As for the QRP gear itself, it worked pretty well:  The Sprint ATS-3A (by Steve Weber, KD1JV) worked flawlessly - as did the tuner.  Instead of earphones, a modified Radio Shack amplified speaker was connected to the radio so that both of us (the other party being Brett, N7KG - whose callsign we used for one of the QSOs) could hear the audio.  For CW keying I used my homebrew, lightweight portable paddles - easy enough to use at 12-15 WPM, but rather clumsy at speeds higher than this.

Figure 3:
The environs of one of the night's QRP operations in the Wind River range of Wyoming.
Click on the image for a larger version.

For power on the trip I brought along a folding 12 watt solar panel; with this I charged both camera and HT batteries and, once those devices were charged, I reconfigured things to top off the NiMH AA cells as appropriate.  I'd started out with 10 fully-charged AA cells, but as my GPS receiver ran down a pair, these would be swapped into the 10-cell holder.  For charging the depleted cells I would pop out the remaining fully-charged AA cells and jumper the empty positions with a clip lead to prevent their overcharging.

As it turned out the 12 watt folding panel was almost adequate for our needs:  At best we had only 2-3 hours of sun when we arrived at camp in the evening and another hour or two of sun in the morning before departing - not leaving too much time to top everything off!  Had I been out another 4-5 days I'd have had to ration power, but this situation could have been improved had I been able to devise some sort of "MPP" (Maximum Power Point) "switching" type charger for the panel rather than the simple, inefficient brute-force charging which amounted to either dumping 12-15 volts into a linear 5-volt regulator for USB-type charging devices (my HT, for example) or simply shunting across the panel to top off the AA cells!  Had I some sort of efficient energy conversion system I could have likely doubled my overall charging efficiency!
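The "MPP" idea mentioned above is commonly implemented as a "perturb and observe" loop:  nudge the converter's operating point, keep the nudge if panel power went up, and reverse it if not.  Here's a toy sketch of that algorithm - the panel model and all the numbers are made up purely for illustration, not measured from this panel:

```python
def mppt_step(panel_power, duty, step=0.01):
    """One perturb-and-observe iteration on the converter duty cycle."""
    if panel_power(duty + step) > panel_power(duty):
        return duty + step        # power rose: keep moving this way
    return duty - step            # power fell: back off

def panel(d):
    """Hypothetical panel/converter power curve peaking at duty = 0.6."""
    return max(0.0, 12.0 - 40.0 * (d - 0.6) ** 2)

duty = 0.30
for _ in range(60):
    duty = mppt_step(panel, duty)
print(f"settled near duty = {duty:.2f}")   # hunts around the 0.6 peak
```

A real implementation would measure panel voltage and current each step and run this loop in a small microcontroller driving the converter's PWM, but the idea is the same:  the loop continually "hunts" around whatever operating point extracts the most power as sun and load conditions change.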

[End]


Monday, July 9, 2012

Solargraphs, or how to get a (sort of) color photo with black-and-white paper

Several years ago on "Astronomy Picture of the Day" I saw an article about Solargraphs - a topic that recurs every year or so.  Essentially, this is a long-exposure pinhole camera set outside on a (mostly) static scene.

The technology is quite simple:
  • Empty a soup can by eating its contents.
  • Empty an aluminum soda pop can by drinking its contents - or by finding someone to do that for you.
  • Get black-and-white print paper from wherever - preferably not panchromatic, but use what you can find.
  • Cut a 1cm (1/2") window in the soup can
  • Cut a 3-4cm (1-1/2-2") square piece out of the middle of the side of the soda pop can
  • Punch a small (0.5mm) hole in the middle of the above piece of the pop can.  Try to make the hole as "clean" as possible.
  • Using black electrical tape, attach the piece of aluminum to the side of the soup can, centering the pinhole over the hole in the soup can.  Make sure that the attachment point is light-proof - hence the use of black electrical tape - but don't cover the pinhole!  (Duct tape isn't necessarily light-proof so I'd avoid it.)
  • Fashion a lid for the top of the soup can.  I've been able to take the rest of the flat aluminum side of the soda can and form it over the top of the soup can (careful of the sharp edges!) and make a lightproof "hat".
  • In subdued light - preferably dim red, say from a photographic safelight, an astronomer's flashlight or a red LED (if you used orthochromatic paper) - cut a piece of photo paper that is about the height of the inside of the soup can and will wrap most of the way around the inside.
  • Place the paper inside the can with the emulsion side facing toward, and centered on, the pinhole.
  • Using black electrical tape again, tape the "hat" on top of the can to make it light-proof.
  • Gently put a piece of tape over the pinhole.
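For the curious, the pinhole size suggested above is in the right optical ballpark.  The sketch below assumes a pinhole-to-paper distance of about 5 cm - it actually varies across the curved paper, so this is only representative - and uses Lord Rayleigh's classic rule of thumb for the "optimal" pinhole diameter:

```python
import math

focal_m = 0.05                 # assumed ~5 cm pinhole-to-paper distance
pinhole_m = 0.0005             # the 0.5 mm hole suggested above
wavelength_m = 550e-9          # green light, middle of the visible band

f_number = focal_m / pinhole_m
d_optimal = 1.9 * math.sqrt(wavelength_m * focal_m)   # Rayleigh's rule of thumb
print(f"about f/{f_number:.0f}; 'optimal' pinhole ~ {d_optimal * 1000:.2f} mm")
```

At roughly f/100, exposures are necessarily enormous - which is exactly why a solargraph runs for months - and the 0.5 mm hole is close enough to the ~0.3 mm "optimal" size that the image stays acceptably sharp.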

Photographic print paper is a bit harder to get locally than it used to be:  When I got my supply, I went to a local photo shop that I'd gone to years ago for my developing supplies and asked where they kept the paper.  Somewhat to my chagrin, I was directed to a single shelf in a corner where there was a sparse assortment of miscellaneous, dusty items.  The only good part about this experience was that everything on this shelf - including the three remaining packs of photo paper - was on sale and heavily discounted.  The paper itself was more than a year out of date, but that's really not much of a concern for black and white paper stored at reasonable temperatures and is even less important when used in a solargraph!  (On the web, it's very easy to find photographic print paper for cheap, so don't despair if you can't get it locally.)

Figure 1:
A solargraph - pointed mostly south - at my workplace.
Click on the image for a larger version.
You now have a rudimentary pinhole camera.  Place it outside or in a window, attached to something solid that will not move (avoid a tree, if possible, as it sways and bows with wind and season).  I attach mine using nylon wire ties to a post, but one could also duct-tape it - just make sure that it is solid and will NOT shift!  It is also best to place it such that it will not be directly exposed to rain and snow - a wooden or metal "hat" attached just above it on a pole or fence post would be good.  And don't forget to remove the piece of tape placed over the pinhole during assembly!


Now, leave it alone for several months, taking good notes on where, exactly, you left it if you happen to be placing it out of the way - say, in the woods.

It is recommended that the pinhole of the solargraph be oriented such that it faces toward the sun (e.g. south for those in the northern hemisphere), but a view toward the east or west is also good.  If you live very far north or south (near the arctic/antarctic circle) then it might be interesting to point it north (or south)-ish during the summer-ish months.
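
The seasonal swing of the sun's arc that these cameras record can be estimated with a simple approximation.  The sketch below uses Cooper's declination formula - an illustrative approximation I'm adding here, not something used in the article - to show the noon sun height at a Salt Lake-like latitude between the two solstices:

```python
import math

def solar_declination_deg(day_of_year):
    # Cooper's approximation for solar declination (degrees) - an
    # illustrative formula, not a precise ephemeris.
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def noon_elevation_deg(latitude_deg, day_of_year):
    # Approximate solar elevation above the horizon at local solar noon.
    return 90.0 - abs(latitude_deg - solar_declination_deg(day_of_year))

# For a latitude of ~40.7 N (roughly Salt Lake City):
print(round(noon_elevation_deg(40.7, 355), 1))  # winter solstice: ~25.9 deg
print(round(noon_elevation_deg(40.7, 172), 1))  # summer solstice: ~72.7 deg
```

That roughly 47-degree swing in the noon sun height is exactly why the band of sun trails in a months-long solargraph comes out so thick.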

Results of some of these solargraphs may be seen in the attached images.  The image above is the first solargraph that I'd done, taken by attaching a soup-can camera to a metal railing at my workplace in a location with a south-facing view; it shows the thick arc of the sun as it changed its elevation over several months of the year.  Also in the image are ghostly vestiges of snow and vehicles that were intermittently present during its exposure.

When I did my first solargraph I didn't know what to expect and upon removing the paper (again, in subdued, red light) the image was a bit faint, so I did what anyone would do:  I immersed the photo print paper in some warm coffee to develop it* - and develop it did, much more quickly than I'd expected!  Before I knew it, the paper had turned a very warm brown from being overdeveloped and not just from the coffee!

At this point, the image was negative, of course, so I placed it on the photo scanner and digitized it, having protected it from light beforehand.  After scanning I noticed that the paper was even darker (and of lower contrast) than before, so it's best NOT to scan it more than absolutely necessary since you are, in fact, "burning" away the image by doing so!
Figure 2:
Winter/Spring exposure looking southwest from a location in the
mountains of central Utah at about 8400ft (2560m) elevation.
Click on the image for a larger version.

Once digitized it's easy to invert the image and as if by magic, there's a semblance of the original scene - and, amazingly enough, in color - on black-and-white print paper!
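
At its heart, the inversion is nothing more than subtracting each pixel value from full scale.  As a toy illustration (a real scan would, of course, go through an image editor such as GIMP), here is the operation on a made-up 2x3 grid of 8-bit pixel values:

```python
# Inverting a scanned negative in software: each 8-bit pixel value v
# becomes 255 - v.  This tiny nested list stands in for a real image.
scan = [
    [0, 128, 255],
    [30, 200, 60],
]

positive = [[255 - v for v in row] for row in scan]
print(positive)  # [[255, 127, 0], [225, 55, 195]]
```

The same subtraction, applied per color channel of the scan, is what turns the faint brownish negative into the pseudo-color positives shown here.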

I'm not sure exactly how the color part works - and, to be sure, it really doesn't work very well - so the pictures on this page have been somewhat color and contrast-enhanced using GIMP, a free image manipulation program.  Even so, I'm sure that you'll agree that the results are tantalizing, if not amazing!

The second solargraph was taken over a much longer period of time (5 months) and placed at a remote cabin in central Utah, attached to a support on a deck.  One can clearly see the apparent motion of the sun's track from the winter solstice to well into spring - a month or so shy of the summer solstice.  Snow was on the ground for much of this time and, being white, it remained in the image even though the ground was at least partially bare for the last month or so of the exposure.

With its longer and more complete exposure - and now knowing what to expect - I removed the paper from the soup can camera and placed it directly into the scanner (again, in subdued light).  After inverting and a bit of brightness/contrast/color tweaking, I got the picture you can see, and with the more thorough exposure the result was much better than the first attempt.  The scene itself lacks color, but that's generally true of winter!

Again, one of the most fascinating aspects of this is that negative pseudo-color images result with the use of black-and-white photographic print paper - and without the need for any developing at all!  It's also interesting to see the changing path of the sun over a period of months - and even of cloudy days as evidenced by the breaks in the yellow sun tracks!

 * As it turns out, some of the constituents of coffee act to develop photographic film and paper - not particularly well, but it does work and is described on many web sites.

[End]

This page stolen from ka7oei.blogspot.com

Monday, July 2, 2012

Two repeaters, one frequency (part 1)

These days, finding a frequency on which to expand one's repeater system can be a challenge - even in "rural" parts of the country such as Utah, where the Salt Lake area is about the only large population center for hundreds of miles.
Figure 1:
The Scott's Hill site, part of the UARC 146.620 system
Click on the image for a larger version.

Typically, a linked repeater system consists of several repeaters tied together on a backbone frequency, with each of these individual repeaters usually on its very own frequency.  About the only time that frequency re-use is implemented is when several of these individual repeaters are located far enough apart that they won't bother each other, and it is often the case that different subaudible tones are used to prevent mutual interference should a user be in an area with potential overlap.

More than a decade ago the Utah Amateur Radio Club decided to expand the coverage of its 146.620 repeater and a mountaintop site was secured - a story in and of itself to be told another day, perhaps.  As things often happen the project lay fallow for several years until a set of circumstances provided the ambition and impetus to push it along farther.

From the beginning, the intent was to have a "Synchronous" and "Voting" repeater on this other site, Scott's Hill, that was to share the same frequency as the original repeater on Farnsworth Peak, but putting together such a system was understandably more involved than the typical linked (but each site using a different frequency) repeater system.

The original repeater on Farnsworth Peak provides impressive coverage - from north of the Utah/Idaho border, west beyond the Utah/Nevada border, and south into parts of central Utah - but it pretty much stops at the Wasatch range to the east of the Salt Lake metro area.  For the most part, the coverage of Salt Lake area repeaters is limited eastward by the abrupt rise of an 11,000 foot mountain range along the east side of the populated areas; unless a repeater is located atop those mountains, coverage to the east is minimal.  Unfortunately - or fortunately - repeaters located in the Wasatch intended to provide coverage to the high valley areas east of the Salt Lake Valley tend not to provide good coverage into the Salt Lake valley itself owing to the shielding effects of the mountains themselves:  The taller peaks on which repeaters are placed are generally set back a bit, and the somewhat lower "front" peaks to their west tend to block the view of the valley.

Scott's Hill is such a site:  It sees well from the east through the northwest, but it can actually see none of the Salt Lake valley to the south and west.  It does, however, have a good, line-of-sight view of Farnsworth Peak, so the linking between the two sites is pretty easy.  This general exclusivity of coverage also means that having the two repeaters effectively share the same frequency would be simplified, as there were relatively few places where the two would overlap with comparable signal levels.

Figure 2:
Voting controller for the 146.620 system.
Click on the image for a larger version.
Now, how does one go about putting two repeaters on the air, on the same frequency, without their clobbering each other?

Multiple receivers on the same frequency:

For receive, the answer is pretty easy:  Voting receivers.

On a "Voting" system, one typically brings the audio from all of the separate receivers to one central location and there, they are all analyzed for signal quality and the best of the lot is selected and used as the audio source for the entire system.

Compared to the typical linked system where the user selects which repeater/frequency is to be used, there are advantages to having ONE frequency with multiple (voting) receivers:
  • Easier to use.  If there is only ONE frequency, the users don't have to constantly change to the best frequency for the area from which they are transmitting - assuming that they know which is the best for their specific location!
  • Frequency re-use.  With a voting system, only ONE frequency is required which can save a bit of spectrum.
  • The whole is greater than sum of the parts.  On a multi-receiver system, it's typical that while one particular receiver works best for a specific area, it's also likely that the less-optimal receivers will also provide a degree of coverage in that same area.  If one enters an area where coverage is a bit "spotty" on the primary-coverage receiver, there's a reasonable chance that one of the other receivers may be able to still hear the mobile and "fill in" - all of this without the user having to worry about it!
  • The addition of even more receivers.  Once the "base" voting system is installed, it's practical to install additional "fill" receivers for those areas where better coverage might be desired:  These extra receivers need only be a simple receiver and link transmitter rather than a full-blown repeater requiring a lot of expensive filters.
While there are a number of ways that voting systems can work, pretty much all of them exploit a "feature" of the frequency modulation (FM) that we use on our VHF and UHF bands:  Quieting.

You have probably noticed that an FM signal sounds the same whether it is very strong or weak - at least until the signal gets to be really weak - at which point it starts to sound noisy, but NOT quieter!  If one listens carefully, it might also be observed that the noise tends to start out at the higher frequencies first - and this is how a radio's squelch works:  It listens for the high-pitched hiss that starts to show up as the signal gets weak.

Most voters listen for this "hiss."  On a typical system, since all of the receivers are listening to the same audio being transmitted, the one with the least amount of hiss is, in fact, the one receiving the best signal.  If you think about it, all one really needs to determine is which one has the least amount of "audio plus hiss" as the only thing that will be different among the receivers with different-quality signals will be the amount of hiss on them.

Ideally, one would do this comparison at the receiver itself where one has access to the "guts" of the receiver and can look at the "discriminator audio" where the spectral content can go into the 10's of kHz.  Practically speaking, however, we have to link these individual receivers back to one site for the voting and conventional FM link radios can't pass the 10's of kHz of audio necessary to do this so the "audio plus hiss" scheme is used.  The voter on the 146.620 system works this way, mostly looking at the higher-frequency audio (e.g. above 2.5-3 kHz) to determine which of the inputs has the "best" signal (e.g. least "audio plus hiss.")
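
The "audio plus hiss" comparison can be sketched in a few lines of code.  To be clear, this is a hypothetical illustration of the technique, not the actual 146.620 voter:  A simple first-difference stands in for the high-pass filter (it emphasizes content above the voice band), and the receiver with the least high-frequency energy wins the vote:

```python
import math
import random

def hiss_metric(samples):
    # First-difference acts as a crude high-pass filter, emphasizing the
    # high-frequency hiss that grows as an FM signal weakens; return its RMS.
    diffs = [samples[i] - samples[i - 1] for i in range(1, len(samples))]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def vote(receivers):
    # Pick the receiver whose audio contains the least "audio plus hiss"
    # energy above the voice band - i.e. the quietest, best signal.
    return min(range(len(receivers)), key=lambda i: hiss_metric(receivers[i]))

def simulated_audio(noise_sigma, seed, n=8000, fs=8000):
    # The same 440 Hz "voice" appears on every receiver; only the amount
    # of receiver-specific noise (hiss) differs.
    rng = random.Random(seed)
    return [math.sin(2 * math.pi * 440 * t / fs) + rng.gauss(0, noise_sigma)
            for t in range(n)]

rx = [simulated_audio(0.5, seed=1),    # weak, hissy signal
      simulated_audio(0.05, seed=2)]   # strong, well-quieted signal
print("voted receiver:", vote(rx))     # voted receiver: 1
```

A real voter does the same comparison continuously, re-voting as signals fade in and out, and with the filtering concentrated above 2.5-3 kHz as described above.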

There are other ways to do this - including digital means where precise signal quality measurements are telemetered to the main controller - but our intent was to construct the entire system using "off the shelf" radio modules that were available on the surplus market so that there would be a reasonable hope of it being maintained in the future.

This article - including more details on two transmitters sharing the same frequency - continues in Part Two.

[End]

This page stolen from ka7oei.blogspot.com