Polyphase filters and non-data-aided timing synchronization!

In non-data-aided synchronization, the early/late gate clock recovery method is used to estimate the offset between the transmit and receive symbol timing. This method uses three samples per symbol: one at the optimal sampling instant, one that is one sample delayed, and one that is one sample advanced.

“This approach works by setting up two filterbanks; one filterbank contains the signal’s pulse shaping matched filter (such as a root raised cosine filter), where each branch of the filterbank contains a different phase of the filter. The second filterbank contains the derivatives of the filters in the first filterbank. Thinking of this in the time domain, the first filterbank contains filters that have a sinc shape to them. We want to align the output signal to be sampled at exactly the peak of the sinc shape. The derivative of the sinc contains a zero at the maximum point of the sinc (sinc(0) = 1, sinc(0)’ = 0). Furthermore, the region around the zero point is relatively linear. We make use of this fact to generate the error signal. If the signal out of the derivative filters is d_i[n] for the ith filter, and the output of the matched filter is x_i[n], the error is calculated as: e[n] = (Re{x_i[n]} * Re{d_i[n]} + Im{x_i[n]} * Im{d_i[n]}) / 2.0 This equation averages the error in the real and imaginary parts. There are two reasons we multiply by the signal itself. First, if the symbol could be positive or negative going, but we want the error term to always tell us to go in the same direction depending on which side of the zero point we are on. The sign of x_i[n] adjusts the error term to do this. Second, the magnitude of x_i[n] scales the error term depending on the symbol’s amplitude, so larger signals give us a stronger error term because we have more confidence in that symbol’s value. Using the magnitude of x_i[n] instead of just the sign is especially good for signals with low SNR. The error signal, e[n], gives us a value proportional to how far away from the zero point we are in the derivative signal. “
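The error term in the quoted passage can be illustrated numerically. Below is a minimal Octave sketch (not GNU Radio code; the sinc pulse and timing offsets are just illustrative) showing that e[n] is roughly proportional to the timing offset near the peak and changes sign across it:

% Matched-filter output modelled as a sinc pulse sampled at small timing
% offsets tau (in symbols), and its derivative; the error term e from the
% quoted formula changes sign as tau crosses the ideal sampling point.
tau = -0.4:0.2:0.4;                               % candidate timing offsets
x_i = sinc(tau);                                  % matched-filter samples
d_i = (sinc(tau+0.01) - sinc(tau-0.01)) / 0.02;   % numerical derivative
e   = (real(x_i).*real(d_i) + imag(x_i).*imag(d_i)) / 2.0;
disp([tau; e]);                                   % e ~ 0 at tau = 0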

1. It is said that “the filters corresponding to the early and late gate filters are trivially the polyphase segments (k – 1) and (k + 1) when testing polyphase segment (k).” What is the theory behind this?

2. Suppose I have a 256:1 decimation filter with an input sampling frequency of 256 kHz and a total of 256*8 taps (or 256 filter banks with 8 taps each). These are linear-phase FIR filters, and each filter bank produces output at a data rate of 1 kHz. Consider the following two scenarios:

i) I start with polyphase filter bank index 1, keep convolving the input data, and roll the polyphase bank index over at 256.

ii) I start with polyphase filter bank index 2 and roll the polyphase bank index over at 256. How does this result in a timing adjustment?

There are many parts to the above questions, and it is best to understand them separately.

a) First, understand how the early/late gate produces a timing estimate. At this stage, ignore polyphase filters.
So you have a simple system that uses a pulse-shaping filter at the transmitter and a matched filter at the receiver. Assume the pulse is box shaped.
Upon correlation you get a triangle-like shape. The correct sampling time is the peak, and the two samples to the left and right of it can be used to estimate the error.
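A minimal Octave sketch of this early/late idea, using a triangular matched-filter output at 8 samples per symbol (the numbers are illustrative):

% Box pulse correlated with itself gives a triangle; sample it around the peak.
sps = 8;
tri = conv(ones(1,sps), ones(1,sps)) / sps;   % triangle with its peak at index sps

k     = sps + 1;            % candidate "on-time" index (try sps, sps+1, sps-1)
early = tri(k - 1);
late  = tri(k + 1);
err   = late - early;       % ~0 at the peak; its sign says which way to move
printf("on-time=%.3f early=%.3f late=%.3f error=%.3f\n", tri(k), early, late, err);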


b) The next concept to understand: using a regular filter (no polyphase), we can change the index (phase) of the filter and get a timing change. How to see this? Normally the sequence is like this:

output1 =  x0*f0 + x1*f1 + x2*f2 + x3*f3 + x4*f4  

output2 = x1*f0 + x2*f1 + x3*f2 + x4*f3 + x5*f4 
How do we make output2 almost the same as output1? Change the index of the filter:
new_output2 = x1*f1 + x2*f2 + x3*f3 + x4*f4 + x5*f0

Only the last term (x5*f0 instead of x0*f0) differs from the timing-shifted output we wanted.
This error is small and lasts only for the symbol where the timing correction was made.
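This can be checked numerically with a short Octave sketch (the filter taps and data values below are arbitrary):

% Shifting the filter index approximates a one-sample timing change.
f = [0.1 0.3 0.5 0.3 0.1];                   % f0..f4
x = [1.0 0.8 0.2 -0.4 -0.9 -0.5 0.3];        % x0..x6

output1     = sum(x(1:5) .* f);              % x0*f0 + x1*f1 + ... + x4*f4
output2     = sum(x(2:6) .* f);              % x1*f0 + x2*f1 + ... + x5*f4
new_output2 = sum(x(2:6) .* [f(2:5) f(1)]);  % x1*f1 + ... + x4*f4 + x5*f0

printf("output1=%.3f  output2=%.3f  new_output2=%.3f\n", output1, output2, new_output2);
% new_output2 differs from output1 only in the last term (x5*f0 instead of x0*f0).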

c) Finally, let us look at polyphase filters. A polyphase filter is nothing but a filter with a more efficient implementation. It is mostly used in decimation/interpolation. This is an extension of the above.

Now let us extend the regular-filter timing-correction example above.

Divide the normal filter into polyphase sub-filters. So the normal filter {f0, f1, f2, f3, f4, f5} is divided into three polyphase sub-filters P0, P1, P2.

P0 = {f0, f3}, P1 = {f1, f4}, P2 = {f2, f5}. Apply the same principle of changing the filter bank index and you will get a one-term difference from the timing-corrected output. The question now is: shouldn’t output2 in this example be exactly the same as output1?

In other words, shouldn’t Y(Advanced) be exactly the same as Y(1) in this example?

D1, D2, D3 are three delay lines.

State 1: At time t = -1

P0 = {f0, f3}, D1 = {x2, x5}
P1 = {f1, f4}, D2 = {x1, x4}
P2 = {f2, f5}, D3 = {x0, x3}

Output from each sub-filter:
y0 = f0*x2 + f3*x5
y1 = f1*x1 + f4*x4
y2 = f2*x0 + f5*x3

Final output:
Y(-1) = y0 + y1 + y2 = f0*x2 + f3*x5 + f1*x1 + f4*x4 + f2*x0 + f5*x3

State 2: At time t = 0

The next sample x6 goes into the delay line and the oldest sample is flushed out.

P0 = {f0, f3}, D1 = {x1, x4}
P1 = {f1, f4}, D2 = {x0, x3}
P2 = {f2, f5}, D3 = {x6, x2}

Output from each sub-filter:
y0 = f0*x1 + f3*x4
y1 = f1*x0 + f4*x3
y2 = f2*x6 + f5*x2

Final output:
Y(0) = y0 + y1 + y2 = f0*x1 + f3*x4 + f1*x0 + f4*x3 + f2*x6 + f5*x2

State 3: At time t = 1

The next sample x7 goes into the delay line and the oldest sample is flushed out.

P0 = {f0, f3}, D1 = {x0, x3}
P1 = {f1, f4}, D2 = {x6, x2}
P2 = {f2, f5}, D3 = {x7, x1}

Output from each sub-filter:
y0 = f0*x0 + f3*x3
y1 = f1*x6 + f4*x2
y2 = f2*x7 + f5*x1

Final output:
Y(1) = y0 + y1 + y2 = f0*x0 + f3*x3 + f1*x6 + f4*x2 + f2*x7 + f5*x1

Change of filter bank index: if you change the filter index at State 2, you will have the following configuration.

The coefficient sets rotate: the new P0 takes P2's coefficients, the new P1 takes P0's, and the new P2 takes P1's.

P0 = {f2, f5}, D1 = {x1, x4}
P1 = {f0, f3}, D2 = {x0, x3}
P2 = {f1, f4}, D3 = {x6, x2}

Output from each sub-filter:
y0 = f2*x1 + f5*x4
y1 = f0*x0 + f3*x3
y2 = f1*x6 + f4*x2

Final output:
Y(Advanced) = y0 + y1 + y2 = f2*x1 + f5*x4 + f0*x0 + f3*x3 + f1*x6 + f4*x2

Y(Advanced) is the same as Y(1) except for the output of one sub-filter: the {f2, f5} branch sees slightly different samples.
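The three states and the bank-index change can be verified with a few lines of Octave (generic values for f0..f5 and x0..x7; remember that Octave indices are 1-based, so f0 is f(1) and x0 is x(1)):

% Verify the worked example: Y(1) versus Y(Advanced).
f = 0.1*(1:6);          % f0..f5
x = 1:8;                % x0..x7

Y1   = f(1)*x(1) + f(4)*x(4) + f(2)*x(7) + f(5)*x(3) + f(3)*x(8) + f(6)*x(2);
Yadv = f(3)*x(2) + f(6)*x(5) + f(1)*x(1) + f(4)*x(4) + f(2)*x(7) + f(5)*x(3);

printf("Y(1) = %.3f, Y(Advanced) = %.3f\n", Y1, Yadv);
% The two agree except for the contribution of the {f2, f5} sub-filter,
% whose delay line holds slightly different samples after the index change.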

Now it is time to think about interpolation. Let us understand it.

You are receiving 10 samples.
After matched filtering you get the output samples y0, y1, y2, …, y9.

When you detect a timing error you want samples at the shifted instants y(0+delta), y(1+delta), …, y(9+delta), where the “delta” is less than one sample.

What are the methods?
A) Take y0, y1, y2, …, y9 and upsample by a factor of 100 (say); that is, insert 99 zeros between y0 and y1, between y1 and y2, and so on.
Now use a perfect interpolation filter to generate the samples between y0 and y1, between y1 and y2, …, up to y9. So you have z0, z1, z2, …, z900. Decimate this output by 100 at the ‘correct’ phase to get the desired output with the timing change.

B) But the operation above is wasteful, in that it generates samples we will throw away. That is, we generate z0 to z900 but use only, say, z2, z102, z202, and so on. So why not generate only z2, z102, z202, …?

Ignoring polyphase for the moment, how can you do this?

Notice that z2 = some filter operating on y0, y1, y2, …, y9. The zeros in between do not affect the output. So,
z2 = filter2 operating on y0, y1, y2, …, y9
z3 = filter3 operating on y0, y1, y2, …, y9
where the coefficients of filter2 and filter3 are different. At this point, there is no glitch.

So what does this mean in terms of an algorithm?
When there is a timing change, use a different set of coefficients to generate the output.

When the timing change is so large that you have to ‘drop’ an input sample, or ‘zero-stuff’ an input sample, you get a glitch.

C) Finally, you can use the matched Nyquist filter itself as the interpolation filter, doing the matched filtering and interpolation in one go instead of the two-step process described here. And the process of switching between different sets of filter coefficients is implemented with a polyphase filter bank.
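A small Octave sketch of (A) versus (B): upsampling, filtering with an interpolation filter, and decimating at the desired phase produces exactly the same samples as applying the corresponding polyphase sub-filter directly to the original data. The interpolation factor and windowed-sinc filter below are illustrative choices.

% Interpolation factor L (100 in the text, 4 here to keep it small)
L = 4;
y = randn(1, 40);                        % "matched filter output" samples y0, y1, ...

% Windowed-sinc interpolation (anti-imaging) low-pass filter, cutoff pi/L
N = 8*L;
n = 0:N-1;
h = L * sinc((n - (N-1)/2)/L) .* hamming(N)';

% Method A: upsample by L, filter, keep every L-th output at phase p
up          = zeros(1, L*length(y));
up(1:L:end) = y;
z  = filter(h, 1, up);
p  = 3;                                  % desired fractional-delay phase (1..L)
zA = z(p:L:end);

% Method B: apply only the p-th polyphase sub-filter to the original samples
hp = h(p:L:end);
zB = filter(hp, 1, y);

printf("max difference between methods A and B: %g\n", max(abs(zA - zB)));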

~~ cheers ~~ Dheeraj

ZC sequences and application in LTE

The Zadoff-Chu (ZC) sequence is a polyphase sequence that is widely used in LTE for the primary synchronization signal (PSS), PRACH, PUCCH DMRS, PUSCH DMRS, and the sounding reference signal (SRS). This is because the ZC sequence has the following properties.

  1. The cyclic autocorrelation of a prime-length ZC sequence with a cyclically shifted version of itself is zero; that is, the autocorrelation is nonzero only at the single lag corresponding to the cyclic shift. This also means that the shifted sequences are orthogonal to each other. Orthogonal sequences are widely used in communication systems, and with this property of the ZC sequence they can be generated easily, just by cyclically shifting a single ZC sequence.
  2. Another important property of the ZC sequence is its cyclic cross-correlation property, which can be stated as follows: “The absolute value of the cyclic cross-correlation function between any two ZC sequences is constant and equal to 1/sqrt(N_ZC), if |u1 - u2| is relatively prime with respect to N_ZC,” where u1 and u2 are the root indices and N_ZC is the sequence length.

Attached are Octave scripts that illustrate these properties.

https://secureservercdn.net/160.153.137.210/wvg.b75.myftpupload.com/wp-content/uploads/2019/10/Understanding-Zadoff-chu-sequence-1-_-Echoes.pdf

https://secureservercdn.net/160.153.137.210/wvg.b75.myftpupload.com/wp-content/uploads/2019/10/Cross-correlation-property-of-ZC-sequence-_-Echoes.pdf
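For a quick, self-contained check of both properties (a minimal sketch, independent of the attached scripts; the length and root indices are arbitrary examples):

% Prime-length ZC sequence and its correlation properties
Nzc = 61;                                   % prime sequence length
u1  = 25;  u2 = 29;                         % two root indices
n   = (0:Nzc-1).';

zc = @(u) exp(-1j*pi*u*n.*(n+1)/Nzc);       % ZC definition for odd Nzc
a  = zc(u1);
b  = zc(u2);

% Property 1: cyclic autocorrelation with a shifted copy has a single peak
shift = 5;
ac = abs(ifft(fft(a) .* conj(fft(circshift(a, shift))))) / Nzc;
printf("autocorr: peak %.2f, largest value at any other lag %.2e\n", ...
       max(ac), max(ac(ac < max(ac) - 0.5)));

% Property 2: cyclic cross-correlation magnitude is flat at 1/sqrt(Nzc)
cc = abs(ifft(fft(a) .* conj(fft(b)))) / Nzc;
printf("crosscorr: min %.4f, max %.4f, 1/sqrt(Nzc) = %.4f\n", ...
       min(cc), max(cc), 1/sqrt(Nzc));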

Cheers

~Dheeraj

Least-Square circle fit and DC-offset estimation


Zero-IF radio receivers, because of a phenomenon called “self-mixing”, generate a DC offset that can be much greater than the desired signal. The following are undesired effects of DC offset:

Low-level amplifier stages can be saturated by large DC offsets before the desired signal is amplified. The DC offset should be removed before frequency correction of the received baseband signal is performed; otherwise the DC offset turns into a tone when the frequency correction is applied, with the AFC loop correcting the DC offset.

METHOD OF REMOVING DC OFFSET FOR A ZF-BASED GSM RADIO SOLUTION WITH DIGITAL FREQUENCY CORRELATION, US20040082302A1

Therefore, wireless communication systems that utilize Zero-IF radios have to overcome large DC offsets in the baseband signal.

In wireless communication systems where the modulated symbols lie on a circle, the center of the circle can be seen as the “DC offset” and the radius of the circle is indicative of the power of the data signal. With no DC offset, the center of the circle lies at the origin; any DC offset displaces the coordinates of the center of the circle (as can be seen in the figure below).

Therefore, the problem of DC-offset estimation can be seen as finding the coordinates of the center of the displaced circle. Once the coordinates of the center are obtained, they can be subtracted from the received signal, giving a corrected, DC-offset-free signal.

Below is an Octave script that estimates the center of the circle using the least-squares method.

% Create data for a circle + noise
th = linspace(0,2*pi,20)';
R=1.1111111;
sigma = R/10;
x = R*cos(th)+randn(size(th))*sigma;
y = R*sin(th)+randn(size(th))*sigma;
plot(x,y,'o'), title('measured points')
pause(1)

% Details and derivation of Least square circle fit algorithm
% here https://dtcenter.org/met/users/docs/write_ups/circle_fit.pdf

% coordinates of the barycenter
x_m = mean(x);
y_m = mean(y);

% calculation of the reduced coordinates
u = x - x_m;
v = y - y_m;

% linear system defining the center (uc, vc) in reduced coordinates:
%    Suu * uc +  Suv * vc = (Suuu + Suvv)/2
%    Suv * uc +  Svv * vc = (Suuv + Svvv)/2

uu = u.^2;
vv = v.^2;

Suv  = sum(u.*v);
Suu  = sum(u.^2);
Svv  = sum(v.^2);

Suuv = sum(uu.*v);
Suvv = sum(u.*vv);
Suuu = sum(u.^3);
Svvv = sum(v.^3);

% Solving the linear system
A = [ Suu, Suv; Suv, Svv];
B = [ (Suuu + Suvv)/2; (Svvv + Suuv)/2 ];
[ss] = A\B;

uc = ss(1);
vc = ss(2);
xc_hat = x_m + uc
yc_hat = y_m + vc

% radius
R_hat     = mean(sqrt((x-xc_hat).^2 + (y-yc_hat).^2));

% reconstruct circle from data
xe = R_hat.*cos(th)+xc_1; ye = R_hat.*sin(th)+yc_1;
plot(x,y,'
o',[xe;xe(1)],[ye;ye(1)],'-.',R*cos(th),R*sin(th)),
title('
measured fitted and true circles')
legend('
measured','fitted','true')
text(xc-R*0.9,yc,sprintf('
center (%g , %g );  R=%g',xc,yc,Re))
xlabel x, ylabel y
axis equal
Result, without noise.
Result, with noise

Cheers

~Dheeraj

Views on 3GPP release-17 from RAN#84

It is sometimes hard to navigate through 3GPP documents. Hence, I tried to parse the documents that relate to the views of various companies on the scope of the 3GPP Release-17 study and work items. These are very interesting documents that give a glimpse into future work.

~~Cheers

Dheeraj

To BEam or Not to BEam !

In my previous articles about smart antennas and beamforming, http://www.techplayon.com/smart-antennas-and-beamforming-understanding-with-gnu-part-2/ and http://www.techplayon.com/smart-antennas-beam-forming-understanding-gnu-part-3/,

I talked about:

  • The relationship between the height of an antenna and its ability to detect a useful signal that has become faint due to propagation. A big antenna collects a lot of electromagnetic waves, just like a big bucket collects a lot of rain. However, this solution of increasing the height of the antenna, or having a very big bucket to collect rain water, is not practical.
  • Another approach to collecting a lot of rain is to use many buckets rather than one large bucket. The advantage is that the buckets can easily be carried one at a time. Collecting electromagnetic waves works in a similar manner: “Many antennas can also be used to collect electromagnetic waves. If the output from these antennas is combined to enhance the total received signal, then the antenna is known as an antenna array.”
  • Does it make a difference to the total collected water whether the small buckets are arranged in a line or in a circle? Does the distance between buckets matter? Maybe not, but in the case of antenna arrays, the geometry of the array and the spacing between the elements do matter.
  • A smart system for collecting rain water is one that adapts to the environmental conditions. Along similar lines, smart antenna systems are designed to adapt to a changing signal environment in order to optimize a given algorithm.

An antenna pattern consists of a main lobe, side lobes, and nulls, as shown in the figure below:

The main lobe is the portion of the pattern with the maximum intended radiation. The side lobes generally point in unintended radiation directions. This blog is an attempt to understand how to suppress the side lobes.

Recall that the array factor of an N-element uniform linear array can be written in vector form as AF(theta) = w.' * a(theta), where w is the vector of element weights and a(theta) = [1, exp(j*2*pi*d*sin(theta)), ..., exp(j*2*pi*(N-1)*d*sin(theta))].' is the steering vector, with d the element spacing in wavelengths.

One of the easiest ways to suppress the side lobes is to apply weights to the array elements. The array weights can be chosen to minimize the side lobes, to shape them, or to place a null at a specific angle.

Window functions can provide array weights for use with linear arrays. Let us see this with the following Octave example:

N = 8; % Number of array Elements
d = 0.5; % Array Element spacing
theta = -pi/2:.01:pi/2;
ang = theta*180/pi;

test = diag(rot90(pascal(N)));               % binomial coefficients 1 7 21 35 35 21 7 1 for N = 8
wB = flipud(test(1:N/2));  wB = wB/max(wB);  % half of the normalized binomial weights, innermost first

% Weighted Array Factor
AF = 0;
tot = sum(wB);
for i = 1:N/2
AF = AF + wB(i)*cos((2*i-1)*pi*d*(sin(theta)));
end

% Normalised Array Factor
AFn = sin(N*pi*d*sin(theta))./(N*pi*d*sin(theta));

%----- Plot Results -----%
figure, plot(ang,abs(AF)/tot,'r', ang,abs(AFn),'k:')
xlabel('\theta (deg)'), ylabel('|AF|')
title('Binomial Weighted Array Factor vs. Angle')
axis([-90 90 0 1.1]), grid on
set(gca,'xTick',[-90:30:90])

The suppressed side lobes can be seen in the red curve corresponding to the weighted array factor. The price paid for suppressing the side lobes is a broadening of the main lobe.

Ref:

  1. Book-1: “Smart Antennas for Wireless Communications”, Frank B. Gross, PhD
  2. Book-2: “Antenna Arrays: A Computational Approach”, Randy L. Haupt

~Cheers

Dheeraj

It’s just a “Phase Noise”–So don’t miss it!


This blog tries to explain phase noise and its effects in OFDM-based communication systems such as 5G NR.

In wireless communication systems there is the notion of a carrier wave (or carrier). This carrier is modulated with the signal that needs to be transmitted. Suppose the signal to be transmitted is x(t) and the ideal carrier is A*cos(w0*t). In the real world, the carrier is better represented as A*cos(w0*t + phi(t)), where phi(t) is the phase noise. Because of this, in practical systems we see not only x(t) around w0 but also side-bands and spurs.

When we look at the noise spectrum of an oscillator, there are regions in which flicker (1/f) noise dominates and other regions where white noise from sources such as shot noise and thermal noise dominates.

Let us understand this using an example in Octave. The script below does the following:

  • Generating a sine wave signal and plotting it.
  • Adding white noise to the phase of the signal and plotting it.
  • Adding low-frequency (1/f-type) noise, generated by accumulating white noise, to the phase of the signal and plotting it.

clear all;
close all;
sigma = 1.2;
fsHz = 655360;
dt = 1/fsHz;
t = 0:dt:500*dt;
%t = 0:0.01:10;
signal0 = cos(2*pi*4*t) + sin(2*pi*4*t);
wn = sigma*randn(1,length(t));                 % white phase noise
signal1 = cos(2*pi*4*t + wn) + sin(2*pi*4*t + wn);
% Generate low-frequency (1/f-type) phase noise by accumulating white noise
% (a random walk) and add it to the phase
noise = cumsum(randn(1,length(t)));
signal2 = cos(2*pi*4*t + noise) + sin(2*pi*4*t + noise);
% signal0 = original signal
% signal1 = signal0 + white noise added to phase
% signal2 = signal0 + 1/f noise added to phase

faxis = linspace(-fsHz/2,fsHz/2,length(t));
subplot(3, 1, 1);
plot(faxis/1000,fftshift(abs(fft(signal0))),'b','linewidth',1.5);
grid on;
title('subplot-1: original signal no noise');
xlabel('Frequency (KHz)')
faxis = linspace(-fsHz/2,fsHz/2,length(t));

subplot(3, 1, 2);
plot(faxis/1000,fftshift(abs(fft(signal1))),'r','linewidth',1.5);
grid on;
title('subplot-2: signal + AWGN added to phase');
xlabel('Frequency (KHz)')
faxis = linspace(-fsHz/2,fsHz/2,length(t));

subplot(3, 1, 3);
plot(faxis/1000,fftshift(abs(fft(signal2))),'k','linewidth',1.5);
grid on;
title('subplot-3: signal + 1/f noise added to phase');
xlabel('Frequency (KHz)')

  • Subplot-3 represents the region where 1/f noise dominates. Instead of a pure delta function, there is a broadening of the spectrum near the carrier frequency.
  • Subplot-2 represents the region where white noise dominates; there are fluctuations of the spectrum far from the carrier frequency.

OFDM baseband signal model considering phase noise

Ref: R1-163984

When there is a mismatch between the transmitter and receiver oscillator frequencies, the frequency difference appears as a shift of the received signal spectrum at baseband. In OFDM, this creates a misalignment between the FFT bins and the peaks of the sinc pulses of the received signal. This breaks the orthogonality between the subcarriers and results in spectral leakage between them. Each subcarrier interferes with every other one (although the effect is dominant between adjacent subcarriers), and since there are many subcarriers this interference behaves like a random process, equivalent to Gaussian noise. Thus, the frequency offset lowers the SINR at the receiver. An OFDM receiver therefore needs to track and compensate phase noise.
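A tiny Octave sketch of this leakage effect (the FFT size, subcarrier index, and offset are illustrative): a single subcarrier received with a fractional frequency offset no longer lands in one FFT bin but spills energy into all the others.

% One subcarrier, with and without a fractional carrier frequency offset
N   = 64;                          % FFT size
n   = 0:N-1;
k   = 10;                          % transmitted subcarrier index
cfo = 0.25;                        % offset, as a fraction of the subcarrier spacing

X0 = abs(fft(exp(1j*2*pi*k*n/N)))/N;           % no offset
X1 = abs(fft(exp(1j*2*pi*(k+cfo)*n/N)))/N;     % with offset

others = [1:k, k+2:N];                         % every bin except bin k
printf("no offset:   bin %d = %.2f, largest other bin = %.2e\n", k, X0(k+1), max(X0(others)));
printf("with offset: bin %d = %.2f, largest other bin = %.2f\n", k, X1(k+1), max(X1(others)));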

The baseband received signal in the presence of phase noise only, assuming that there is no additive white Gaussian noise (AWGN), is given by the following equation:

    r[n] = x[n] * exp(jθ[n])                                                          (1)

where the transmitted signal is multiplied by a noisy carrier exp(jθ[n]).

The received signal is passed through the FFT in order to obtain the symbol transmitted on the m-th subcarrier of the OFDM symbol as follows:

    Y[m] = X[m] * (1/N) * sum_{n=0..N-1} exp(jθ[n])
           + sum_{k != m} X[k] * (1/N) * sum_{n=0..N-1} exp(jθ[n]) * exp(j*2*pi*(k-m)*n/N)      (2)

Since the first term on the right-hand side of (2) (i.e., the mean of exp(jθ[n]) over one OFDM symbol duration) does not depend on the subcarrier index m, it is called the common phase error (CPE). This term causes a common phase rotation of the constellation of the received symbols. The CPE can be estimated from the reference signals and removed.

The second term is the summation of the information on the other subcarriers, each multiplied by a complex number that comes from an average of the phase noise with a spectral shift. The result is a complex number added to each subcarrier’s useful signal, and it has the appearance of Gaussian noise. It is normally known as inter-carrier interference (ICI), or loss of orthogonality.

Hence phase noise has two main impacts: one is that each subcarrier is affected by a common phase error (CPE), which appears as a multiplication by the same complex gain across all subcarriers; the other is inter-carrier interference (ICI), which results from the loss of orthogonality between the subcarriers of the OFDM waveform.
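As a minimal Octave sketch of these two effects on a QPSK OFDM symbol (illustrative parameters; no channel or AWGN, and the CPE is computed from the known phase for simplicity, whereas a real receiver would estimate it from reference signals):

% Phase noise on one OFDM symbol: common phase error (CPE) plus ICI
N = 256;                                            % number of subcarriers
X = (2*randi([0 1],1,N)-1 + 1j*(2*randi([0 1],1,N)-1)) / sqrt(2);   % QPSK symbols
x = ifft(X) * sqrt(N);                              % time-domain OFDM symbol

theta = cumsum(0.01*randn(1,N));                    % slowly varying phase noise
r     = x .* exp(1j*theta);                         % received signal, phase noise only

Y    = fft(r) / sqrt(N);                            % demodulated subcarriers
cpe  = mean(exp(1j*theta));                         % CPE: mean of exp(jθ) over the symbol
Ycpe = Y / cpe;                                     % remove the common rotation

printf("CPE angle: %.3f rad\n", arg(cpe));
printf("RMS error before CPE removal: %.4f\n", sqrt(mean(abs(Y    - X).^2)));
printf("RMS error after  CPE removal: %.4f (residual is the ICI)\n", sqrt(mean(abs(Ycpe - X).^2)));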

The ICI due to phase noise creates a fuzzy constellation, as shown in the figure below:

~Peace

Dheeraj