Why is the same VSS-LMS algorithm code not working for seismic data?

Here I have provided code for an adaptive filter using the VSS-LMS (Variable Step Size Least Mean Square) algorithm. I found this code posted on MATLAB Central. There, random values from a normal distribution are taken as input, and the cost function E[error^2] converges to a minimum.
% Desired System (link: http://www.firsuite.net/FIR/AKSOY08_NORCHIP_G40)
sys_desired = [86 -294 -287 -262 -120 140 438 641 613 276 -325 -1009 -1487 -1451 -680 856 2954 5206 7106 8192 8192 7106 5206 2954 856 -680 -1451 -1487 -1009 -325 276 613 641 438 140 -120 -262 -287 -294 86] * 2^(-15);
%% Ergodic Process
% Note that the uploaded Figure is obtained for itr=500
for itr=1:100
%% Defining input and initial Model Coefficients
%input
%a=importdata("G:\Project\BKHL.HHE.new.dat");
%x=a';
x=randn(1,60000);
% Model for the LMS Algorithm
model_coeff = zeros(1,length(sys_desired));
% Model for the VSS-LMS Algorithm
model_coeff_vss = zeros(1,length(sys_desired));
%% Initial Values of Model Tap
model_tap = zeros(1,length(sys_desired));
%% System Output where a 40 dB Noise floor is added
noise_floor = 40;
sys_opt = filter(sys_desired,1,x)+awgn(x,noise_floor)-x;
%% Lower and Upper Bounds of Learning Rate
%%%%%%%% The bounds above come from this paper:
%%% R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," in IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1633-1642, July 1992.
%%%doi: 10.1109/78.143435
%%%%%%%%%%%%%%%%%%
%input variance
input_var = var(x);
% upper bound = 1/(filter_length * input variance)
mu_max = 1/(input_var*length(sys_desired));
%learning rate for LMS algorithm
mu_LMS = 0.0004;
% lower bound = learning rate for LMS algorithm
mu_min = mu_LMS;
%%%%%%%%%%%%%%% NOTE %%%%%%%%%%%%%%%%%%%%%%%%%
% If the difference between mu_max and mu_min is not sufficient
% Error Curves for both LMS and VSS LMS algorithms would be IDENTICAL
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Defining initial parameters for VSS-LMS algorithm
mu_VSS(1)=1; %initial value of mu for VSS
alpha = 0.97;
gamma = 4.8e-4;
%Note that these values were defined in the above paper
%% Execution of Both LMS and VSS-LMS algorithms
for i=1:length(x)
%% LMS Algorithm
% model tap values (shifting of tap values by one sample to right)
model_tap=[x(i) model_tap(1:end-1)];
% model output
model_out(i) = model_tap * model_coeff';
%error
e_LMS(i)=sys_opt(i)-model_out(i);
%Updating the coefficients
model_coeff = model_coeff + mu_LMS * e_LMS(i) * model_tap;
%% VSS LMS Algorithm
% model output
model_out_vss(i) = model_tap * model_coeff_vss';
% error
e_vss(i) = sys_opt(i) - model_out_vss(i);
%Updating the coefficients
model_coeff_vss = model_coeff_vss + mu_VSS(i) * e_vss(i) * model_tap;
%Updating the mu value using the VSS algorithm
mu_VSS(i+1) = alpha * mu_VSS(i) + gamma * e_vss(i) * e_vss(i) ;
%% Checking the constraints of mu as given in the paper
if (mu_VSS(i+1)>mu_max)
mu_VSS(i+1)=mu_max;%max
elseif(mu_VSS(i+1)<mu_min)
mu_VSS(i+1)= mu_min;
else
mu_VSS(i+1) = mu_VSS(i+1) ;
end
end
%% Storing the e_square values after a whole run of LMS and VSS LMS algorithms
err_LMS(itr,:) = e_LMS.^2;
err_VSS(itr,:) = e_vss.^2;
%% Printing the iteration number
clc
disp(char(strcat('iteration no : ',{' '}, num2str(itr) )))
end
%% Comparing the Error Curves
figure;
plot(10*log10(mean(err_LMS)),'-b');
hold on;
plot(10*log10(mean(err_VSS)), '-r');
title('Comparison of LMS and VSS LMS Algorithms'); xlabel('iterations');ylabel('MSE(dB)');legend('LMS Algorithm','VSS LMS Algorithm')
grid on;
This is the output.
But I am not getting the same convergence for the seismic data (acceleration data) attached here; the Excel file was converted to a .dat file.
The output I got for the seismic data is given below.
Why am I not getting the convergence?
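For reference, the recurrence the listing above implements (weight update, then mu(i+1) = alpha*mu(i) + gamma*e(i)^2 clamped to [mu_min, mu_max]) can be sketched in isolation. This is a minimal Python translation with a hypothetical noiseless 3-tap system and an illustrative lower bound, not the thread's exact 40-tap setup:

```python
import random

random.seed(0)

# Hypothetical 3-tap system (the thread identifies a 40-tap FIR); noiseless desired
h = [0.5, -0.3, 0.2]
L = len(h)
N = 4000
x = [random.gauss(0.0, 1.0) for _ in range(N)]

var_x = sum(v * v for v in x) / N
mu_max = 1.0 / (L * var_x)            # upper bound, as in the thread's code
mu_min = 1e-4                          # illustrative lower bound
alpha, gamma = 0.97, 4.8e-4            # values from Kwong & Johnston
mu = min(1.0, mu_max)                  # initial step size, clamped into range

w = [0.0] * L                          # adaptive weights
tap = [0.0] * L                        # tapped delay line
err, mus = [], []

for n in range(N):
    tap = [x[n]] + tap[:-1]                      # shift in the newest sample
    d = sum(h[k] * tap[k] for k in range(L))     # desired (system) output
    y = sum(w[k] * tap[k] for k in range(L))     # model output
    e = d - y
    w = [w[k] + mu * e * tap[k] for k in range(L)]
    mu = alpha * mu + gamma * e * e              # variable step-size update
    mu = min(max(mu, mu_min), mu_max)            # clamp to [mu_min, mu_max]
    err.append(e * e)
    mus.append(mu)
```

With a white, stationary input like this, the squared error decays toward zero and the weights converge to the true taps, which is the behaviour the original MATLAB run shows.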

Accepted Answer

Hello,
Well, it is converging a bit with your data, but not "as good" as with the truly stationary and ergodic signals we prefer to use for system identification (white noise, i.e. a random signal).
Why use seismic data for that task?
Also note that in the original code the random noise input is regenerated at each iteration, while with your signal we can only run one iteration; that makes reading the convergence much harder, as there is no averaging effect.
Your signal also has a quantization problem: it is quite "staircase", as the peak of the signal is coded on only about 10 levels.
This is also a limiting factor on LMS algorithm performance.
Try to see if you can get a better signal with an analog preamp before the signal gets digitized.
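The "staircase" point can be made concrete. This is a rough Python sketch (the ~0.6 amplitude and 0.06 step are the figures quoted later in this thread; the sine test input is an assumption) estimating the quantization SNR of such a coarse record:

```python
import math

# Hypothetical full-scale sine matching the record's observed amplitude (~0.6)
# and resolution step (~0.06), i.e. about 10 quantization levels per peak.
amp, step = 0.6, 0.06
N = 100000
sig = [amp * math.sin(2 * math.pi * 7 * n / N) for n in range(N)]
q = [step * round(s / step) for s in sig]        # mid-tread uniform quantizer

sig_p = sum(s * s for s in sig) / N              # signal power
noise_p = sum((s - v) ** 2 for s, v in zip(sig, q)) / N  # quantization noise power
snr_db = 10 * math.log10(sig_p / noise_p)
# roughly 10*log10(6*(amp/step)^2), i.e. near 28 dB -- far below the ~98 dB
# of a 16-bit audio signal
```

A noise floor near 28 dB below the signal peak dominates whatever the adaptive filter could otherwise achieve.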
With a random signal, the LMS and VSS-LMS identified models are perfectly overlaid (because we have the "best" input signal).
With your signal and one single iteration there is still convergence, but it is not complete; you can try a longer excitation signal or increase the mu gain.
This plot is obtained with the same convergence gains as in the original code:
Now we can make the convergence much faster: I increased the gain for both algorithms by a factor of 100:
%learning rate for LMS algorithm
mu_LMS = 0.0004*100;
and
%% Defining initial parameters for VSS-LMS algorithm
mu_VSS(1)=1; %initial value of mu for VSS
alpha = 0.97;
gamma = 4.8e-4*100;
And we get almost "perfect" results, even though the signal is not really 100% appropriate for the task.
And the convergence curve:
This is in the updated code below:
% Desired System (link: http://www.firsuite.net/FIR/AKSOY08_NORCHIP_G40)
sys_desired = [86 -294 -287 -262 -120 140 438 641 613 276 -325 -1009 -1487 -1451 -680 856 2954 5206 7106 8192 8192 7106 5206 2954 856 -680 -1451 -1487 -1009 -325 276 613 641 438 140 -120 -262 -287 -294 86] * 2^(-15);
%% Ergodic Process
% Note that the uploaded Figure is obtained for itr=500
x = readmatrix('BKHL.HHZ.new.csv');
x = x./max(abs(x)); % normalize the data to +/-1 range (optional)
x = x';
for itr=1:10
%% Defining input and initial Model Coefficients
%input
%x=randn(1,60000);
% Model for the LMS Algorithm
model_coeff = zeros(1,length(sys_desired));
% Model for the VSS-LMS Algorithm
model_coeff_vss = zeros(1,length(sys_desired));
%% Initial Values of Model Tap
model_tap = zeros(1,length(sys_desired));
%% System Output where a 40 dB Noise floor is added
noise_floor = 40;
d = randn(size(x))*10^(-noise_floor/20);
% sys_opt = filter(sys_desired,1,x)+awgn(x,noise_floor)-x;
sys_opt = filter(sys_desired,1,x)+d;
%% Lower and Upper Bounds of Learning Rate
%%%%%%%% The bounds above come from this paper:
%%% R. H. Kwong and E. W. Johnston, "A variable step size LMS algorithm," in IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1633-1642, July 1992.
%%%doi: 10.1109/78.143435
%%%%%%%%%%%%%%%%%%
%input variance
input_var = var(x);
% upper bound = 1/(filter_length * input variance)
mu_max = 1/(input_var*length(sys_desired));
%learning rate for LMS algorithm
mu_LMS = 0.0004*100;
% lower bound = learning rate for LMS algorithm
mu_min = mu_LMS;
%%%%%%%%%%%%%%% NOTE %%%%%%%%%%%%%%%%%%%%%%%%%
% If the difference between mu_max and mu_min is not sufficient
% Error Curves for both LMS and VSS LMS algorithms would be IDENTICAL
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Defining initial parameters for VSS-LMS algorithm
mu_VSS(1)=1; %initial value of mu for VSS
alpha = 0.97;
gamma = 4.8e-4*100;
%Note that these values were defined in the above paper
%% Execution of Both LMS and VSS-LMS algorithms
for i=1:length(x)
%% LMS Algorithm
% model tap values (shifting of tap values by one sample to right)
model_tap=[x(i) model_tap(1:end-1)];
% model output
model_out(i) = model_tap * model_coeff';
%error
e_LMS(i)=sys_opt(i)-model_out(i);
%Updating the coefficients
model_coeff = model_coeff + mu_LMS * e_LMS(i) * model_tap;
%% VSS LMS Algorithm
% model output
model_out_vss(i) = model_tap * model_coeff_vss';
% error
e_vss(i) = sys_opt(i) - model_out_vss(i);
%Updating the coefficients
model_coeff_vss = model_coeff_vss + mu_VSS(i) * e_vss(i) * model_tap;
%Updating the mu value using the VSS algorithm
mu_VSS(i+1) = alpha * mu_VSS(i) + gamma * e_vss(i) * e_vss(i) ;
%% Checking the constraints of mu as given in the paper
if (mu_VSS(i+1)>mu_max)
mu_VSS(i+1)=mu_max;%max
elseif(mu_VSS(i+1)<mu_min)
mu_VSS(i+1)= mu_min;
else
mu_VSS(i+1) = mu_VSS(i+1) ;
end
end
%% Storing the e_square values after a whole run of LMS and VSS LMS algorithms
err_LMS(itr,:) = e_LMS.^2;
err_VSS(itr,:) = e_vss.^2;
%% Printing the iteration number
clc
disp(char(strcat('iteration no : ',{' '}, num2str(itr) )))
end
%% Comparing the Error Curves
figure;
plot(10*log10(mean(err_LMS,1)),'-b');
hold on;
plot(10*log10(mean(err_VSS,1)), '-r');
title('Comparison of LMS and VSS LMS Algorithms'); xlabel('iterations');ylabel('MSE(dB)');legend('LMS Algorithm','VSS LMS Algorithm')
grid on;
figure;
plot(sys_desired,'-*b');
hold on;
plot(model_coeff, '-*r');
plot(model_coeff_vss, '-*m');
legend('FIR model','LMS FIR ID','VSSLMS FIR ID');
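One detail worth noting in the updated listing: the `awgn` call is replaced by explicit noise generation, `d = randn(size(x))*10^(-noise_floor/20)`. A quick Python check (purely illustrative) that this amplitude scaling indeed yields a -40 dB noise power relative to unit-variance noise:

```python
import math
import random

random.seed(3)

noise_floor = 40                      # target noise floor in dB, as in the code
scale = 10 ** (-noise_floor / 20)     # amplitude scaling: -40 dB -> 0.01
d = [random.gauss(0.0, 1.0) * scale for _ in range(200000)]

power_db = 10 * math.log10(sum(v * v for v in d) / len(d))
# power_db comes out very close to -40 dB: scaling the amplitude by
# 10^(-dB/20) scales the power by 10^(-dB/10)
```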

4 Comments

Thank you so much @Mathieu NOE.
Actually, I was trying to implement the work carried out in a paper titled 'Framework for Automated Earthquake Event Detection Based on Denoising by Adaptive Filter'.
They say that they analysed the filter using seismic data.
I will attach the whole paper as well.
Though the paper's algorithm is a variant of VSS-LMS, I tried to analyse this first with the plain VSS-LMS algorithm.
Hello again.
Yes, adaptive LMS filters are also used for signal denoising, but that is not the purpose of the MATLAB code you provided. This code is about identifying a system's transfer function, using the input/output signals of the system to be identified.
Hello,
Thank you @Mathieu NOE for responding.
Actually, I was going through a paper in which a new algorithm (a variant of VSS-LMS) is proposed, and the performance of the adaptive filter algorithm is analysed. From that paper, what I have understood is that the performance was analysed using seismic data as input.
My doubts are:
1) Don't we have to use the system identification code for analysing the performance of an adaptive filter?
2) Can we analyse the performance of an adaptive filter using seismic data as input?
It will be of great help to me. Kindly help me.
You can find the paper at the link given below:
https://in.mathworks.com/matlabcentral/answers/1886307-can-anyone-help-me-in-understanding-an-algorithm-and-tell-me-where-am-i-wrong-in-simulating-the-adap
I have read the publication as well, but not as deeply as you (I am on this forum only for limited periods of time, so I cannot spend hours reading publications in detail).
So far, I have understood that the paper deals with data denoising, which is not what the MATLAB code does. As I already mentioned, the code is for system identification, but that is not what we want for seismic data analysis. The paper simply compares different adaptive filter codes for denoising, not for system identification purposes.
What I think would have been interesting is if the publication gave a bit more information on what the data looks like before and after denoising.
Maybe you should contact the authors?


More Answers (1)

Thank you @Mathieu NOE again.
So is the paper comparing different adaptive algorithms based on system identification code or denoising code?
From my reading, I was not able to decipher which code they used for the comparison (system identification or denoising), though their purpose is denoising.
I tried to contact them and I am expecting their reply.
Kindly clear my doubt.

13 Comments

Hello again.
Adaptive digital filters are commonly used for many tasks:
  • denoising (your project)
  • system / model identification (the MATLAB code you posted)
  • system control (see the example below: identification of a model, then control, i.e. rejection of disturbances)
%--------------------------------------------------------------------------
% One of my friends asked me about the FxLMS algorithm. So, in return,
% I provided him a little example of a single channel feed-forward active
% noise control system based on the FxLMS. You can find many good
% information in "Active Noise Control Systems - Algorithms and DSP
% Implementations," written by S. M. Kuo and D. R. Morgan in 1996.
%
% Here is the sketch of the system.
%
%              +-----------+                       +
% x(k) ---+--->|   P(z)    |--yp(k)----------------> sum --+---> e(k)
%         |    +-----------+                          ^-   |
%         |                                           |    |
%         |        \                                ys(k)  |
%         |    +-----------+          +-----------+   |    |
%         +--->|   C(z)    |--yw(k)-->|   S(z)    |---+    |
%         |    +-----------+          +-----------+        |
%         |          \                                     |
%         |           \----------------\                   |
%         |                             \                  |
%         |    +-----------+          +-----------+        |
%         +--->|   Sh(z)   |--xs(k)-->|    LMS    |<-------+
%              +-----------+          +-----------+
%
% I used FIR filter to model P(z), C(z), S(z), and Sh(z).
%
% Imagine that the noise x(k) is propagating from the source to the sensor,
% through the fluid medium P(z). The sensor measures the arriving noise as
% yp(k).
%
% To reduce noise, we generate another 'noise' yw(k) using the controller
% C(z). We hope that it destructively interferes x(k). It means that the
% controller has to be a model of the propagation medium P(z). Least mean
% square algorithm is applied to adjust the controller coefficient/weight.
%
% However, there is also a fluid medium S(z) that sits between the actuator
% and the sensor. We call it the secondary propagation path. So, to make the
% solution right, we need to compensate the adjustment process using Sh(z),
% which is an estimate of S(z).
%
% Let's start the code :)
%
% Developed by Agustinus Oey <oeyaugust@gmail.com>
% Center of Noise and Vibration Control (NoViC)
% Department of Mechanical Engineering
% Korea Advanced Institute of Science and Technology (KAIST)
% Daejeon, South Korea
%--------------------------------------------------------------------------
% Set simulation duration (normalized)
clear
close all
T=1000;
% We do not know P(z) and S(z) in reality. So we have to make dummy paths
Pw=[0.01 0.25 0.5 1 0.5 0.25 0.01];
Sw=Pw*0.25;
% Remember that the first task is to estimate S(z). So, we can generate a
% white noise signal,
x_iden=randn(1,T);
% send it to the actuator, and measure it at the sensor position,
y_iden=filter(Sw, 1, x_iden);
% Then, start the identification process
Shx=zeros(1,16); % the state of Sh(z)
Shw=zeros(1,16); % the weight of Sh(z)
e_iden=zeros(1,T); % data buffer for the identification error
% and apply least mean square algorithm
mu=0.05; % learning rate
for k=1:T, % discrete time k
Shx=[x_iden(k) Shx(1:15)]; % update the state
Shy=sum(Shx.*Shw); % calculate output of Sh(z)
e_iden(k)=y_iden(k)-Shy; % calculate error
Shw=Shw+mu*e_iden(k)*Shx; % adjust the weight
end
% Lets check the result
figure(1)
subplot(2,1,1)
plot([1:T], e_iden)
ylabel('Amplitude');
xlabel('Discrete time k');
legend('Identification error');
subplot(2,1,2)
stem(Sw)
hold on
stem(Shw, 'r*')
ylabel('Amplitude');
xlabel('Numbering of filter tap');
legend('Coefficients of S(z)', 'Coefficients of Sh(z)')
hold off
% The second task is the active control itself. Again, we need to simulate
% the actual condition. In practice, it should be an iterative process of
% 'measure', 'control', and 'adjust'; sample by sample. Now, let's generate
% the noise:
X=randn(1,T);
% and measure the arriving noise at the sensor position,
Yd=filter(Pw, 1, X);
% Initiate the system,
Cx=zeros(1,16); % the state of C(z)
Cw=zeros(1,16); % the weight of C(z)
Sx=zeros(size(Sw)); % the dummy state for the secondary path
e_cont=zeros(1,T); % data buffer for the control error
Xhx=zeros(1,16); % the state of the filtered x(k)
% and apply the FxLMS algorithm
mu=0.05; % learning rate
for k=1:T, % discrete time k
Cx=[X(k) Cx(1:15)]; % update the controller state
Cy=sum(Cx.*Cw); % calculate the controller output
Sx=[Cy Sx(1:length(Sx)-1)]; % propagate to secondary path
e_cont(k)=Yd(k)-sum(Sx.*Sw); % measure the residue
Shx=[X(k) Shx(1:15)]; % update the state of Sh(z)
Xhx=[sum(Shx.*Shw) Xhx(1:15)]; % calculate the filtered x(k)
Cw=Cw+mu*e_cont(k)*Xhx; % adjust the controller weight
end
% Report the result
figure(2)
subplot(2,1,1)
plot([1:T], e_cont)
ylabel('Amplitude');
xlabel('Discrete time k');
legend('Noise residue')
subplot(2,1,2)
plot([1:T], Yd)
hold on
plot([1:T], Yd-e_cont, 'r:')
ylabel('Amplitude');
xlabel('Discrete time k');
legend('Noise signal', 'Control signal')
hold off
Pardon, I didn't get what you meant by the statement below.
My input data contains around 30,000 values, so is it not updating?
Thank you.
Hello,
The original code with x as a random signal (not your seismic data) generates a new random x sequence, different at each iteration:
for itr=1:100
%% Defining input and initial Model Coefficients
%input
%a=importdata("G:\Project\BKHL.HHE.new.dat");
%x=a';
x=randn(1,60000); % is a different x at each iteration !
So when you display the mean of the errors, you get a smoother error curve.
When you use the seismic data, you use the same data at every iteration, so it is a different scenario, and you don't benefit from averaging the results over multiple iterations.
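The averaging effect can be illustrated numerically. In this small Python sketch the decaying-exponential-plus-fluctuation curve is only a stand-in for one LMS run's squared-error trace (not the actual algorithm); it shows how averaging 100 independent runs shrinks the per-sample scatter:

```python
import math
import random

random.seed(1)

runs, N = 100, 500

def one_run():
    # Hypothetical squared-error curve: the same decaying trend plus
    # independent fluctuations, standing in for one LMS run on fresh noise
    return [math.exp(-n / 50) + 0.2 * random.random() for n in range(N)]

curves = [one_run() for _ in range(runs)]
avg = [sum(c[n] for c in curves) / runs for n in range(N)]

def tail_var(x):
    # sample variance over the last 100 points of a curve
    tail = x[-100:]
    m = sum(tail) / len(tail)
    return sum((v - m) ** 2 for v in tail) / len(tail)
```

Averaging 100 independent runs reduces the scatter variance roughly 100-fold, which is why the original code's regenerated random input gives such clean MSE curves, while a single fixed seismic record cannot.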
How can I convert this code for the purpose of denoising? Will the addition of the function syntax below make it a denoiser?
function[model_out_evss,e_evss]=myEVSSLMS(y)
sys_desired = [86 -294 -287 -262 -120 140 438 641 613 276 -325 -1009 -1487 -1451 -680 856 2954 5206 7106 8192 8192 7106 5206 2954 856 -680 -1451 -1487 -1009 -325 276 613 641 438 140 -120 -262 -287 -294 86] * 2^(-15);
%sys_desired = [-3 0 19 32 19 0 -3]*2^(-15);
%a=importdata("G:\Project\BKHL.HHE.new.dat");
%digfilt = designfilt('lowpassfir', 'PassbandFrequency', 10, 'StopbandFrequency',20,'SampleRate', 100);
%sys_desired = digfilt.Coefficients;
length_fir = length(sys_desired);
for itr=1:100
%% Defining input and initial Model Coefficients
%input
x=y';
%a=importdata("G:\Project\BKHL.HHE.new.dat");
%x=a';
%% EVSS LMS
model_coeff_evss = zeros(1,length_fir);
%% Initial Values of Model Tap
model_tap = zeros(1,length_fir);
%% System Output where a 40 dB Noise floor is added
noise_floor = 40;
d = randn(size(x))*10^(-noise_floor/20);
sys_opt = filter(sys_desired,1,x)+d;
mu_min = 0.007;
% input variance
input_var = var(x);
% upper bound = 1/(filter_length * input variance)
mu_max = 1/(input_var*length_fir);
%% Defining initial parameters for EVSS-LMS algorithm
mu_EVSS(1) = 0.025;
mu_EXTRA = 1.5*10^(-4);
%% EVSS LMS ALGORITHM
for i=1:length(x)
% model tap values (shifting of tap values by one sample to right)
model_tap=[x(i) model_tap(1:end-1)];
% model output
model_out_evss(i) = model_tap * model_coeff_evss';
% error
e_evss(i) = sys_opt(i) - model_out_evss(i);
%Updating the coefficients
model_coeff_evss = model_coeff_evss + mu_EVSS(i) * e_evss(i) * model_tap;
if mu_EVSS(i)>mu_min
c = 1;
else % mu_EVSS(i) <= mu_min
c = 2^(-5); % <= here
mu_EVSS(i) = mu_min; % <= here (missing line)
end
if mu_EVSS(i)>mu_max
mu_EVSS(i) = mu_max; % <= here (missing line)
end
mu_EVSS(i+1) = c * mu_EVSS(i) + mu_EXTRA * sign(e_evss(i));
end
%% Storing the e_square values after a whole run of VSS LMS algorithm
err_EVSS(itr,:) = e_evss.^2;
%% Printing the iteration number
clc
disp(char(strcat('iteration no : ',{' '}, num2str(itr) )))
end
figure;
plot(10*log10(mean(err_EVSS,1)),'-b');
title('VSS LMS Algorithms'); xlabel('iterations');ylabel('MSE(dB)');
grid on;
figure;
plot(sys_desired,'-*b');
hold on;
plot(model_coeff_evss, '-*m');
legend('FIR model','VSSLMS FIR ID');
end
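For clarity, the EVSS step-size rule in the loop above (with the two missing clamping lines restored) can be written as a standalone function. This is a Python sketch; the mu_max value here is purely illustrative, since the real upper bound depends on the input variance:

```python
# Hypothetical constants mirroring the function above; mu_max is illustrative
mu_min, mu_max = 0.007, 0.05
c_small = 2 ** -5
mu_extra = 1.5e-4

def sign(v):
    # sign of v: +1, -1, or 0
    return (v > 0) - (v < 0)

def evss_step(mu, e):
    """One EVSS step-size update: pick the decay factor, clamp, then update."""
    if mu > mu_min:
        c = 1.0
    else:                              # mu <= mu_min: slow decay, clamp from below
        c = c_small
        mu = mu_min
    if mu > mu_max:                    # clamp from above
        mu = mu_max
    return c * mu + mu_extra * sign(e)
```

The sign(e) term nudges the step size by a fixed amount regardless of error magnitude, which is the main difference from the plain VSS-LMS rule that uses gamma*e^2.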
Some publications dealing with noise reduction assume that the "noise" itself is somehow measurable separately from the total signal (which is true if we are speaking about mains power hum or similar measurable perturbations),
but for an earthquake signal we get only one signal and no separate "noise" source.
The only similar LMS application I am aware of is the so-called "adaptive line enhancer" (ALE), which tries to remove the uncorrelated portion of the signal (i.e. the noise) and extract the correlated, predictable portion of it (the clean signal we want).
Below is a demo of a noisy sine wave that can be "cleaned" without having access to the "noise" signal.
You can see that the algorithm below is very similar to yours; the only difference is that the mu gain is constant here, but you could make it variable if you wish:
%% Adaptive Line Enhancement Using Least Mean Square (LMS) Algorithm
%% Start
clc;
clear all;
close all;
%% Generating sample signal to be filtered
f=10; % frequency of the sinusoidal signal
fs=50*f; % sampling frequency
ts=1/fs; % sampling time
amp=5; % amplitude of the sinusoidal
N = 300; % Number of samples
t=(0:N-1)*ts; % Time instant
SNR = 0.2; % Noise level
sig = amp*sin(2*pi*f*t); % Generating sinusoidal signal
x = sig+SNR*amp*randn(size(sig)); % Addition of white gaussian noise of defined SNR
%% LMS parameter
iter = 25; % Number of iterations
fir_length=16; % tap delays / order of filter
U = zeros(1,fir_length+1); % Input frame length
mu = 1e-5; % Learning rate for LMS
W = 0*U; % Initial weights of LMS
for k = 1 : iter
for n = 1 : N-fir_length
U(1,2:end) = U(1,1:end-1); % Shifting of frame window
U(1,1) = x(n); % Input
% LMS update
y = (W)*U'; % Output of LMS
e = x(n+fir_length) - y; % Instantaneous error between predicted and desired signal (NB : x is delayed)
W = W + mu * e * U; % Weight update
J(k,n) = e'*e; % Instantaneous squared error
end
end
CF = mean(J,2); % Cost Function (CF): Mean squared error of LMS
%% Plotting input, output, and desired signals
figure
z=filter(W,1,x);
plot(t,sig,'b',t,x,'r',t,z,'k')
legend('Noise free signal','Noisy signal','Filtered signal');
%% Plotting Cost Function (MSE)
figure
plot(10*log10(CF))
title('Cost Function: Mean squared error (MSE)');
xlabel('epochs iterations');
ylabel('MSE (dB)');
grid minor
Now, if you test that algorithm with your noisy earthquake signal, it does not provide any benefit, because your signal is noise plus a non-stationary (earthquake) signal.
Again, as I already mentioned, your record has poor quality: we can clearly see the quantization effect. Here is a zoom on the signal: the maximum amplitude is +/- 0.6 (g?) and your resolution level is only 0.06, so the whole signal is quantized on about 10 levels, which is very low (in comparison, a 16-bit audio signal is quantized on +/- 32768 levels).
Before applying the LMS filter (which I doubt can really do much on this signal), there is some improvement to be made on the sensor's analog signal (better sensitivity, more gain before the signal gets sampled).
I guess that the "real" earthquake is the tiny oscillation buried in noise around t = 1;
the rest of the time we have only noise, and the LMS ALE could not really do much on a transient signal, as I expected.
The same code with your signal did not bring any benefit; whatever mu gain I chose, I could not really extract the earthquake wave from the quantization noise.
%% Adaptive Line Enhancement Using Least Mean Square (LMS) Algorithm
%% Start
clc;
clear all;
close all;
%% Generating sample signal to be filtered
x = readmatrix('BKHL.HHZ.new.csv');
x = x';
ts=1; % sampling time
N = length(x); % Number of samples
t=(0:N-1)*ts; % Time instant
%% LMS parameter
iter = 25; % Number of iterations
fir_length=16; % tap delays / order of filter
U = zeros(1,fir_length+1); % Input frame length
mu = 1e-3; % Learning rate for LMS
W = 0*U; % Initial weights of LMS
for k = 1 : iter
for n = 1 : N-fir_length
U(1,2:end) = U(1,1:end-1); % Shifting of frame window
U(1,1) = x(n); % Input
% LMS update
y = (W)*U'; % Output of LMS
e = x(n+fir_length) - y; % Instantaneous error between predicted and desired signal (NB : x is delayed)
W = W + mu * e * U; % Weight update
J(k,n) = e'*e; % Instantaneous squared error
end
end
CF = mean(J,2); % Cost Function (CF): Mean squared error of LMS
%% Plotting input, output, and desired signals
figure
z=filter(W,1,x);
plot(t,x,'r',t,z,'k')
legend('Noisy signal','Filtered signal');
%% Plotting Cost Function (MSE)
figure
plot(10*log10(CF))
title('Cost Function: Mean squared error (MSE)');
xlabel('epochs iterations');
ylabel('MSE (dB)');
grid minor
I think the adaptive configuration for the mentioned paper is an equalizer.
This is a screenshot from a website about adaptive filters. From the marked words in the picture above, what I understood is that I have to train the proposed adaptive algorithm with 'x=randn(1,6000)' to match the characteristics of the communication channel (an AWGN channel),
and then we have to filter our seismic dataset with the coefficients obtained from the algorithm.
An excerpt of the paper 'Framework for Automated Earthquake Event Detection Based on Denoising by Adaptive Filter' is given below.
Correct me if what I understood is wrong.
Hello,
The equalizer topic is pretty much the same thing as the "unknown" system identification I already mentioned earlier in this post.
As you can see from the top picture, you need access to the input and output signals of the system to be identified/equalized,
so it is not really signal "denoising".
You can find a lot of material on denoising with adaptive filters on the internet, but I was a bit disappointed to see that most authors assume there is always a way to access the noise source (in addition to the noisy signal to be cleaned); that way you can use both signals with a standard LMS scheme to recover the original signal. But that is not our scenario here.
Look for "adaptive line enhancement" on the internet.
The adaptive line enhancer (ALE) is a special type of adaptive noise cancellation (ANC) filter in which only one (noisy) signal is used to recover the desired signal. As we know, noise is uncorrelated with itself: ideally, the autocorrelation of noise is zero at any nonzero lag. The ALE filters out the noise by correlating the noisy signal with a delayed version of itself.
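The correlation argument can be checked directly. A small Python sketch (pure illustration, with an arbitrary sine period) comparing the normalized autocorrelation of a periodic signal and of white noise at a one-period lag:

```python
import math
import random

random.seed(2)

N, period = 10000, 50
sine = [math.sin(2 * math.pi * n / period) for n in range(N)]
noise = [random.gauss(0.0, 1.0) for _ in range(N)]

def autocorr(x, lag):
    # Normalized autocorrelation estimate at the given lag
    num = sum(x[n] * x[n - lag] for n in range(lag, len(x)))
    den = sum(v * v for v in x)
    return num / den

# A periodic signal stays strongly correlated with a delayed copy of itself,
# while white noise does not -- this is exactly what the ALE exploits.
```

This is why the ALE can separate a persistent narrowband component from broadband noise, but cannot do much for a short transient like an earthquake onset, which has little long-lag self-correlation.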
But we will not get the expected filtered output, right @Mathieu NOE?
Sorry, I am not sure;
what is your question?
It's okay, my doubt is cleared.
Thank you so much @Mathieu NOE.

