This time: A discussion of adding audio filtering, decimation/interpolation and fractional I/Q phase adjustments for both receive and transmit.
Doing audio processing with limited resources:
Being that the mcHF is entirely self-contained and has a reasonably powerful, yet modest processor (the STM32F405 or '407 processor running at 168 MHz - a device with an ARM Cortex M4 core that includes hardware floating-point support) there are practical restrictions on how much number-crunching one can get away with when performing various tasks. As you might expect, the most time-critical task is the "real time" processing of the receive data from the dual A/D converters as I/Q channels into demodulated audio.
If one goes through the literature on typical SDRs in the amateur world it quickly becomes apparent that limited processing power is often not a prime consideration. This is not too surprising considering the fact that multi-GHz, multimedia processors in desktop computers are ubiquitous, so processing power is considered to be "cheap" and very often a lot of processing is thrown at a problem simply because it is available!
For example, one might implement an FIR (Finite Impulse Response) audio filter with 512 or even as many as 2048 taps without batting an eye on a PC and still have resources to burn, but try that on the processor in the mcHF and you'll have suddenly used up the vast majority of processor power (if not all of it - and more!) on that one task!
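To put rough numbers on this - a back-of-the-envelope estimate, not a measurement of any particular implementation: an N-tap FIR filter requires N multiply-accumulate (MAC) operations per output sample, so at a 48 ksps sample rate a 2048-tap filter works out to 2048 x 48,000 = roughly 98 million MACs per second. Even with the Cortex M4's single-cycle MAC instruction that is the lion's share of a 168 MHz core - before any other processing is done at all!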
So, one starts to ask various questions like:
- How can I do something while using less processor horsepower?
- How "good" is "good enough"?
- If I cut a corner somewhere, how detrimental will it really be in practical, real-world situations?
Receive signal processing:
The mcHF uses a standard, inexpensive "sound card" type codec (a Wolfson WM8731 or the compatible TLV320AIC23) for all A/D and D/A conversion. This is a 16-bit device capable of a number of sample rates from 8 to 96 ksps (kilosamples per second), but in the mcHF it is typically operated at 48 ksps - this to allow visualization of the spectrum +/- 24 kHz from the center, as can be seen in the picture at the top of the page.
Being a typical SDR one of the first steps in signal processing is the same as that of any typical "phasing" rig - analog or digital - and that is to impart a differential 90 degree phase shift on the audio channels, a task that is, in this case, accomplished with a "Hilbert" transform.
Now, at the beginning the mcHF's audio path was "baseband" based, which is a confusing way to say that all demodulation was done around "zero Hz" or "near DC": A demodulated CW tone tuned to yield 700 Hz was, in fact, 700 Hz away from the local oscillator frequency. This is not an ideal situation for a number of reasons (which will be covered in a later installment!) but it was, from the beginning, the easiest thing to do.
Because the demodulation was done "near DC" this meant that the Hilbert transformer had to be of the "0 degree/90 degree" type rather than the more typical "-45/+45 degree" type: While the former can be made to behave fairly well at low audio frequencies with a reasonable number of FIR taps, the latter cannot! An 81-tap FIR-based Hilbert transformer is used to provide the audio phase shift, providing reasonable performance down to a couple hundred Hz - as low as we need to go for SSB!
Once the two audio channels are set 0/90 degrees from each other one can then do the math to convert the two channels into USB and LSB - which is just addition or subtraction of these channels: Which does which depends on which phase happens to have been assigned where in the hardware. This demodulation converts the separate I/Q channels into just ONE audio channel, but it is not yet bandpass-filtered!
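As an illustration, here is a minimal sketch of that step as it might be written with the CMSIS-DSP library - the names and placeholder coefficient tables are hypothetical, not the actual mcHF code:

#include "arm_math.h"
#include <stdbool.h>

#define BLOCK_SIZE   64     /* samples per audio "chunk" */
#define HILBERT_TAPS 81

/* Placeholder tables - the real coefficients come from a filter
   design tool, not from this sketch: */
static float32_t coeffs_0deg[HILBERT_TAPS];
static float32_t coeffs_90deg[HILBERT_TAPS];

static float32_t state_i[BLOCK_SIZE + HILBERT_TAPS - 1];
static float32_t state_q[BLOCK_SIZE + HILBERT_TAPS - 1];
static arm_fir_instance_f32 fir_0deg, fir_90deg;

void demod_init(void)
{
    arm_fir_init_f32(&fir_0deg,  HILBERT_TAPS, coeffs_0deg,  state_i, BLOCK_SIZE);
    arm_fir_init_f32(&fir_90deg, HILBERT_TAPS, coeffs_90deg, state_q, BLOCK_SIZE);
}

void demod_ssb(float32_t *i_in, float32_t *q_in, float32_t *audio, bool usb)
{
    float32_t i_shift[BLOCK_SIZE], q_shift[BLOCK_SIZE];

    /* Impart the differential 0/90 degree phase shift... */
    arm_fir_f32(&fir_0deg,  i_in, i_shift, BLOCK_SIZE);
    arm_fir_f32(&fir_90deg, q_in, q_shift, BLOCK_SIZE);

    /* ...then add for one sideband, subtract for the other - which is
       which depends on how I and Q were assigned in the hardware! */
    if (usb)
        arm_add_f32(i_shift, q_shift, audio, BLOCK_SIZE);
    else
        arm_sub_f32(i_shift, q_shift, audio, BLOCK_SIZE);
}

The sideband selection itself is just one addition or subtraction per sample - almost free compared to the FIR filtering that precedes it!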
Working with limited resources:
IIR instead of FIR audio filtering:
Originally the code used a combination of FIR low-pass and bandpass filters with 48 taps to provide the receive audio filtering, but because the receiver sample rate was 48 ksps, with so few taps it was not practical to define a very "sharp" audio filter. To get a "properly sharp" FIR audio filter at 48 ksps would require perhaps 3-4 times as many taps, commensurately larger audio buffers and an increased amount of processor overhead, so I decided to take a different approach.
Most of my embedded programming has been on fairly low-end PIC microcontrollers (the PIC16 and PIC18 families), on which I have used rather low-complexity FIR and IIR filters since the resources of those processors are extremely limited. Having cut my teeth on these types of filters I was comfortable enough with IIR filters that I was not scared off by their (somewhat undeserved!) reputation of being "inherently unstable", and I also knew from experience that IIR filters could, when properly finessed, offer superior performance to FIR filters in situations where one needs to severely limit the amount of processor overhead - particularly when floating-point math is available.
Having access to MATLAB and its filter design/simulation tools, I soon had some fairly low-complexity IIR filters working - thanks to the built-in support of the CMSIS DSP library (link) - that offered performance far superior to the original FIR filters, with reasonable "shape factors" (i.e. the ratio of the filter's bandwidth at high attenuation to its passband bandwidth).
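As a rough illustration of what this looks like in code - not the actual mcHF filters; the pass-through placeholder coefficients below stand in for the real MATLAB-designed values - the CMSIS-DSP biquad-cascade routines do all of the heavy lifting:

#include "arm_math.h"

#define NUM_STAGES 2        /* 2 biquads = a 4th-order IIR filter */
#define BLOCK_SIZE 64

/* 5 coefficients per stage: {b0, b1, b2, a1, a2}.  NOTE: CMSIS expects
   the feedback ("a") coefficients to be NEGATED relative to MATLAB's
   convention - a classic "gotcha"!  These pass-through placeholders
   stand in for the real values: */
static float32_t iir_coeffs[5 * NUM_STAGES] = {
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f, 0.0f, 0.0f
};
static float32_t iir_state[4 * NUM_STAGES];  /* 4 state words per stage */
static arm_biquad_casd_df1_inst_f32 iir;

void audio_filter_init(void)
{
    arm_biquad_cascade_df1_init_f32(&iir, NUM_STAGES, iir_coeffs, iir_state);
}

void audio_filter_run(float32_t *buf)
{
    arm_biquad_cascade_df1_f32(&iir, buf, buf, BLOCK_SIZE);  /* in-place */
}

Each biquad stage costs only a handful of multiplies per sample, which is how a two- or three-stage IIR can rival a much longer FIR filter at a small fraction of the processor overhead.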
Decimation to reduce processor loading:
So it remained like this for several versions of code: All of the audio processing was being done at 48 ksps but I had not yet taken advantage of another trick available to reduce processor overhead: Decimation.
At this point I was still crunching numbers at 48 ksps - this, to produce audio that had no components higher than 4 kHz or so! Nyquist tells us that to process such signals we need not sample at any higher than 8 kHz, so running at many times that sample rate was simply wasting CPU power!
The term "decimation" in DSP terms simply means keeping 1 out of N samples and throwing the rest away, and if you have fewer samples, there are fewer numbers to crunch. For practical reasons - sometimes those dictated by available library functions and/or the sizes of buffers - it is usually best to pick "N" as a power-of-two (e.g. 2, 4, 8, 16, etc.) For the second of these reasons (e.g. the audio "chunk" size was 64 samples - a number that was NOT evenly divisible by 6!) my best choice was to decimate-by-four to reduce my sample rate from 48 ksps to 12 ksps.
If you throw away samples you must still do something about the audio spectral content that would lie above the Nyquist limit at the new sample rate, so one must first low-pass filter those signals away: In my case, that meant getting rid of everything at 6 kHz and above!
Comments:
- In theory, a decimation-by-8 to yield a sample rate of 6 ksps would have worked since, for normal SSB, my maximum audio frequency could be less than 3 kHz. The problem is that this would have placed my Nyquist limit very near my desired maximum frequency of 2700 Hz or so - and in the audible range - requiring pretty tight filtering. The added complexity of such filtering may have countered any gains afforded by the reduction in sample rate, plus it would have precluded the use of "wide" audio filters (such as the 3.6 kHz filter) for SSB and AM unless I were to have added yet another decimation rate! (For CW or Digital-mode filters this would probably be fine...)
The CMSIS library for the ARM Cortex-M4 contains a handy decimation function with a built-in FIR low-pass filter, but number-crunching (with the aid of MATLAB) showed that I'd need quite a few FIR taps - and additional processor overhead - to get the needed 60 dB or so of low-pass filtering!
Fortunately, I already had a low-pass filter at hand: The Hilbert transformer!
Using the free "Iowa Hills" filter design tools I designed a new Hilbert Transformer that was identical to what was already in there, except that it had a low-pass cutoff starting at about 3.6 kHz or so - just about right for the 3.6 kHz audio filter in the radio - and with 81 taps it provided reasonable (>=55 dB) worst-case low-pass filtering to prevent aliasing at and above 3.6-ish kHz, the highest audio frequency that my audio filters would pass.
At this point I'll say a few words about "appropriate" audio filtering: I really didn't care too much if the filtering was insufficient to prevent "audible" aliasing above the top of the audio passband, since any aliasing energy landing there was going to be removed by the SSB filters anyway.
For example, if one picked, say, 4 kHz as the highest-frequency signal that was going to get through the 3.6 kHz audio filter (e.g. a strong CW signal down on the "skirts" of that filter) we know via math that its alias would be at 12 - 4 = 8 kHz. What that means is that, worst-case, our strongest alias signal - and the "closest" to our audio passband - would be at 8 kHz, so our filter must attenuate adequately - by some 60 dB or so - between the 3.6 kHz representing the "top" of our widest filter and 8 kHz.
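To generalize that arithmetic - this "folding" relation is standard DSP, nothing specific to the mcHF: at the new 12 ksps rate, input energy at a frequency "f" between the 6 kHz Nyquist limit and 12 kHz lands at an alias frequency of

    f(alias) = 12 kHz - f(input)

Run backwards, the lowest-frequency input that can fold onto a given audio frequency is 12 kHz minus that frequency - hence 8 kHz being the critical point for a 4 kHz passband edge.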
On this point I actually "fudged" a bit: The filtering was adequate to knock it down only by 50 dB or so, but since the 3.6 kHz filter was not going to be used very often I looked, instead, at the numbers for the "2.3 kHz" filter - which actually passes audio between about 300 Hz and 2600 Hz, and its steep skirt is around 3100 Hz or so. Taking 3100 Hz as our "new" highest frequency the alias would be at 12 - 3.1 = 8.9 kHz and this extra (almost) 1 kHz of roll-off provided another 10 dB or so on our low-pass filter.
Because the CMSIS decimation function required that I put some FIR low-pass filtering in place - that is, I could not use the function without it - I used as few FIR taps as practical, finessing them with MATLAB so that they, too, did as much filtering as they could with those few taps, achieving something in the 10-20 dB area (depending on frequency) on top of that achieved with the Hilbert transform - so we now had our target of at least 60 dB!
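A minimal sketch of how that CMSIS decimator might be set up for the decimate-by-four described above (the tap count here is arbitrary and the coefficient table is an empty placeholder - the real values came from MATLAB):

#include "arm_math.h"

#define BLOCK_SIZE  64      /* input samples per audio "chunk" */
#define DECIM_M     4       /* 48 ksps / 4 = 12 ksps */
#define DECIM_TAPS  16      /* deliberately few taps - see text! */

static float32_t decim_coeffs[DECIM_TAPS];  /* fill in from MATLAB */
static float32_t decim_state[DECIM_TAPS + BLOCK_SIZE - 1];
static arm_fir_decimate_instance_f32 decim;

void decimator_init(void)
{
    /* Returns ARM_MATH_LENGTH_ERROR if BLOCK_SIZE is not an integer
       multiple of DECIM_M - the reason "N" and the 64-sample chunk
       size must agree! */
    arm_fir_decimate_init_f32(&decim, DECIM_TAPS, DECIM_M,
                              decim_coeffs, decim_state, BLOCK_SIZE);
}

void decimator_run(float32_t *in48k, float32_t *out12k)
{
    /* Consumes BLOCK_SIZE samples at 48 ksps, produces
       BLOCK_SIZE / 4 = 16 samples at 12 ksps: */
    arm_fir_decimate_f32(&decim, in48k, out12k, BLOCK_SIZE);
}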
At the output of the decimator I now had to work at 12 ksps rather than 48 ksps, which meant that I had to redesign all of my audio filters - but keeping them as complex as they had been before, even though the sample rate was now lower, meant that they could be made "sharper" than before and thus offer higher performance!
Since I had one-fourth as much audio data to crunch, more processor horsepower was now available to do other things, allowing additional features to be added in the future!
Interpolating back to 48 ksps:
At the end of the audio filtering and AGC processing I had another problem: The D/A converter still operated at 48 ksps, so I had to do an "interpolation" step to up-convert from 12 ksps. Here, too, one must do a bit of low-pass filtering, but for a different reason: If up-converted directly to 48 ksps, the raw 12 ksps audio would contain aliased (image) energy that can be heard - if your ears are good enough - and could be extremely annoying!
As with the decimation, there is a CMSIS library interpolation function and it has a built-in FIR low-pass function, but there is no real need to make this filtering particularly strong.
In the worst-case scenario with the 3.6 kHz audio filter selected, there will be aliased audio content at (12 kHz - 3.6 kHz =) 8.4 kHz - but nothing below that - so a fairly weak low-pass filter with relatively few FIR taps (to minimize processor loading!) was designed in MATLAB. This filter was designed to start rolling off above 3.6 kHz; by the time it got to 8.4 kHz it was attenuating by 25 dB or so, and it was down by 30-40 dB by the time it got to 9-10 kHz - where the vast majority of the aliasing energy would be when operating the most-used "2.3 kHz" audio filter. Because of the way the human ear works, this "clutter" at high audio frequencies, knocked down by 25+ dB, would probably not even be noticed by someone with the most acute hearing!
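Here is a similar sketch for the CMSIS interpolator on the 12-to-48 ksps step (again with an arbitrary tap count and placeholder coefficients; note that CMSIS requires the tap count to be an integer multiple of the interpolation factor):

#include "arm_math.h"

#define IN_BLOCK    16      /* 12 ksps samples in per chunk */
#define INTERP_L    4       /* 12 ksps x 4 = 48 ksps */
#define INTERP_TAPS 16      /* must be a multiple of INTERP_L */

static float32_t interp_coeffs[INTERP_TAPS];  /* fill in from MATLAB */
static float32_t interp_state[(INTERP_TAPS / INTERP_L) + IN_BLOCK - 1];
static arm_fir_interpolate_instance_f32 interp;

void interpolator_init(void)
{
    arm_fir_interpolate_init_f32(&interp, INTERP_L, INTERP_TAPS,
                                 interp_coeffs, interp_state, IN_BLOCK);
}

void interpolator_run(float32_t *in12k, float32_t *out48k)
{
    /* Consumes IN_BLOCK samples at 12 ksps, produces
       IN_BLOCK x 4 = 64 samples at 48 ksps: */
    arm_fir_interpolate_f32(&interp, in12k, out48k, IN_BLOCK);
}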
Upon getting this code operational, I fed the LINE OUT from the receiver into the Spectrum Lab program with a sound card running at 192 kHz and verified that the low-pass filtering did, in fact, reduce the aliased signals by the predicted amount. I then routed the audio into full-range speakers and, with a graphic equalizer, intentionally boosted the highs by 10-15dB in an effort to accentuate the aliasing signal but even in a worst-case scenario with single tones the aliasing was pretty much inaudible.
Comments:
- For the 10 kHz audio filter mode, decimation/interpolation-by-2 is used, and similar tricks were applied to the Hilbert transformer to minimize processor loading. In this case the interpolation's low-pass filtering is far less effective since the Nyquist frequency is 12 kHz, so the output contains aliasing components that are only 6-15 dB down, worst-case. Even so, with a 9.8 kHz tone - its alias at 24 - 9.8 = 14.2 kHz - the psychoacoustic properties of human hearing make the alias somewhat difficult to hear.
- The codec chip is also capable of operating at 8 ksps, both for A/D and D/A operations. While this would greatly reduce processor overhead, it would limit the view on the spectrum scope to just +/- 4 kHz! Had there not been enough processing power to do what needed to be done, this might have been a necessary strategy, but for voice modes it turned out not to be needed. In the future, when digital modes are contemplated and additional processing power may be needed - but a wide "spectrum scope" is not - the use of an 8 ksps rate will be considered.
One of the problems with real-world hardware is that there will inevitably be variations in the I/Q phasing and amplitude of the audio as it is processed by the two audio A/D converters. This phase difference could be from a slight shift in the local oscillator signal or, more likely, it could be due to component variations in the analog circuitry comprising the mixer and filtering that precedes the A/D converter. Whatever causes this problem, it must be addressed to maximize the opposite-sideband rejection.
Addressing the amplitude imbalance is pretty easy: One simply scales the I and Q channels by a small, fractional, user-adjustable amount. Typically both channels are scaled by equal and opposite amounts so that the total amplitude remains generally constant.
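As a sketch of the idea - variable names hypothetical - using the CMSIS-DSP scaling function:

#include "arm_math.h"

/* "balance" is the small, user-adjustable fractional amount (e.g.
   somewhere in the range of -0.05 to +0.05).  The channels are scaled
   by equal and opposite amounts around unity so that the overall
   level stays roughly constant: */
void iq_amplitude_adjust(float32_t *i_buf, float32_t *q_buf,
                         uint32_t len, float32_t balance)
{
    arm_scale_f32(i_buf, 1.0f + balance, i_buf, len);
    arm_scale_f32(q_buf, 1.0f - balance, q_buf, len);
}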
Phase adjustment, on the other hand, is trickier!
In researching how this is done in PC-based SDR implementations I saw that there appeared to be handy, on-the-fly phase adjustments that could be made using various library functions to transform the I/Q signals. While such functions may have existed somewhere in the bowels of the libraries available to me, real-time calculation of the phase for each incoming sample would represent a prohibitive cost in terms of processor power!
So, how would one take care of this problem with a minimum of overhead, preferably with a one-time calculation?
It struck me that if I could modify the Hilbert transformer in a "fractional" manner I might be able to effect a fractional phase adjustment. Since calculating the 81 coefficients internally was out of the question (I didn't want to figure out how to do that from scratch - plus it would have been a pain if I later needed to change the Hilbert transformer's parameters!) I wondered if I could "tweak" a fixed set of coefficients without "breaking" the transformation to a significant degree.
Using the Iowa Hills program I calculated four sets of Hilbert coefficients: 0 degrees, 90 degrees, 89.5 degrees and 90.5 degrees. In the mcHF, the 0 degree set (the "I" channel) would remain constant, but if the I/Q phase needed to be adjusted, the data from the nearest "alternate" set of coefficients for the "Q" channel would be proportionally blended. For example, if 89.95 degrees was needed, it would use 10% of the 89.5 degree set and 90% of the 90.0 degree coefficients and this new data would be input to the Hilbert transformer.
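A sketch of that blending - the names are hypothetical and the placeholder tables stand in for the Iowa Hills-generated values:

#include "arm_math.h"

#define HILBERT_TAPS 81

/* Placeholder tables - the real values came from the Iowa Hills
   filter design tools: */
static const float32_t q_coeffs_89_5[HILBERT_TAPS];   /* 89.5 degrees */
static const float32_t q_coeffs_90_0[HILBERT_TAPS];   /* 90.0 degrees */

static float32_t q_coeffs_blended[HILBERT_TAPS];

/* Straight-line blend between the two bracketing tables.  For the
   89.95 degree example: blend = 0.9, i.e. 90% of the 90.0 degree set
   plus 10% of the 89.5 degree set.  (A third table at 90.5 degrees is
   used the same way for adjustments above 90 degrees.)  This runs just
   ONCE when the setting changes - NOT for every sample! */
void hilbert_set_phase(float32_t blend)
{
    for (int i = 0; i < HILBERT_TAPS; i++)
        q_coeffs_blended[i] = (1.0f - blend) * q_coeffs_89_5[i]
                            + blend * q_coeffs_90_0[i];
}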
Putting this new code into the receiver, using a signal generator, and making very careful measurements at the "worst case" settings of 89.75 and 90.25 degrees (and a few points in between) I measured only very minor amplitude (for which I could compensate!) and phase degradation in the performance of the Hilbert transformer due to these "straight line" approximations and called it good!
This method of phase adjustment is applied for both receive and transmit, but in practice it has been observed that far more amplitude adjustment is typically required than phase adjustment to effect optimal opposite-sideband rejection, at least if good-tolerance components - especially capacitors - are used in the receive and transmit paths!
Comment:
- On my mcHF transceiver I have typically required well under 1/10th of a degree of phase adjustment, so the available range of +/- 0.5 degrees is probably overkill!
What about transmit?
All of the above tricks could be applied to transmit but, thus far, there has been no need to do any decimation/interpolation - just the Hilbert transformations, amplitude and phase adjustments, and some audio processing and filtering - topics for later discussion.
For transmitting, since fewer operations need to be accomplished than in receive, everything still operates at 48 ksps with processing power to spare. If it does become necessary in the future, I could decimate/interpolate within the transmit function and "regain" a significant number of CPU cycles!
How does the receiver sound "on the air"?
In tuning around with this receiver, the result of this processing - which is all done with floating-point arithmetic - is at least comparable to any of my other all-analog radios in terms of filter performance, adjacent channel rejection and opposite-sideband rejection: Even under "contest" conditions with a very strong signal "next door" the receiver seems to be perfectly capable of holding its own!
While a few shortcuts had to be taken to reduce the amount of processing to an amount that could be handled by this radio, it does not seem to have demonstrably compromised its receive performance in actual, real-world conditions!
[End]
This page stolen from "ka7oei.blogspot.com".