# How to prepare input for the MUSIC DOA algorithm using a real-world audio signal (not a simulation)

18 views (last 30 days)
tachung chi on 15 Jan 2021
Hi,
I am working on using the MUSIC algorithm to calculate the DOA of an ultrasonic sound input. I have the configuration of the microphone array, and the number of microphones is 7. The received audio signal is therefore a real-valued [7 x N] matrix, where N is the number of samples collected in a fixed time duration.
I am able to define the phased.URA object using the array configuration (microphone spacing, positions, and so on). The problem is that I do not know how to convert my input signal into the kind of matrix generated by the collectPlaneWave function in the demo. Specifically, the output of collectPlaneWave is a complex matrix, while my audio input is a real-valued [7 x N] matrix.
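For a narrowband signal, one common way to obtain a complex matrix playing the same role as the output of collectPlaneWave is to take the analytic signal of each channel with hilbert(). A minimal sketch, assuming the recording is transposed to [N x 7] (samples x elements), which is the orientation the Phased Array Toolbox estimators expect; rx here is a random placeholder standing in for the real recording:

```matlab
% Sketch only: for a narrowband signal, form the complex analytic signal
% channel by channel with hilbert(). "rx" is a placeholder for the real
% recording, assumed arranged as [N x 7] (samples x microphones),
% i.e. the transpose of the 7xN matrix described above.
rx = randn(1024, 7);   % placeholder for the real-valued recording
x  = hilbert(rx);      % complex analytic signal, same size [N x 7]
% x can now play the role of the complex matrix that collectPlaneWave
% returns (samples x elements), e.g. as input to a MUSIC estimator.
```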


### Answers (2)

Priyanka Rai on 2 Feb 2021


Colin Waters on 24 Mar 2022
No, that link does not help. The description fails to specify the data types (real or complex) and the matrix shapes of the variables, e.g. x(t).
There is a way to get the complex x(t) matrix (and hence a complex covariance matrix) if the signal is narrowband and a plane wave. Assume you have a plane wave (e^(i(k·r − ωt)) model) propagating across the sensor array, and that the sensors are placed so as to avoid spatial aliasing, etc.
Choose one sensor as a reference. For each sensor's (real) time series, do an FFT to find the frequency of the narrowband signal. Then compute the cross spectrum of each sensor's time series against the reference sensor's. This gives you the phase difference at the relevant frequency between the reference and each of the other sensors. That phase difference is k·r for each sensor pair, and you know the r vector from the sensor locations, measured from the reference sensor.
You can then generate the complex time series at each sensor, using the phase differences as k·r in the signal model, and from there compute the complex covariance matrix, etc.
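The cross-spectrum step above can be sketched as follows. This is illustrative only: the 1 kHz tone, its fixed phase steps, and all variable names are assumptions standing in for real data, with sensor 1 as the reference.

```matlab
% Sketch of the cross-spectrum phase extraction, assuming a narrowband
% tone and "data" arranged as [N x M] (samples x sensors).
N  = 1024; M = 7; fs = 16000;
t  = (0:N-1).'/fs;
data = cos(2*pi*1000*t + (0:M-1)*0.3);  % toy data: 1 kHz tone, 0.3 rad phase steps
X  = fft(data);                         % FFT of every channel (column-wise)
[~, kbin] = max(abs(X(1:N/2, 1)));      % dominant bin from the reference channel
cs   = X(kbin, :) .* conj(X(kbin, 1));  % cross spectrum vs. reference at that bin
dphi = angle(cs);                       % phase differences = k·r per sensor
% For this toy signal dphi climbs in steps of ~0.3 rad, recovering the
% phase offsets, which is exactly the k·r information described above.
```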
##### 9 comments
Colin Waters on 11 May 2022
Firstly, an FFT of 1024 real input values gives only 512 unique complex values; the other 512 are complex conjugates, so they add nothing new.
Secondly, choose one frequency bin. The covariance matrix for that single frequency is a 6 x 6 complex matrix, called the 'spectral matrix' by some.
Put the 6 complex values from the same frequency bin of all 6 microphone data streams (i.e. of their FFTs) into a 6 x 1 complex vector, then compute the outer product to get the 6 x 6 complex spectral matrix. The diagonal elements should be real.
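A minimal sketch of this construction, with placeholder FFT data (X, kbin and the random input are assumptions for illustration):

```matlab
% Sketch, assuming X is the [N x 6] FFT of the six microphone channels
% and kbin indexes the narrowband frequency bin of interest.
X    = fft(randn(1024, 6));  % placeholder channel FFTs
kbin = 11;                   % example bin index
v    = X(kbin, :).';         % 6 x 1 complex vector (.' transposes without conjugating)
R    = v * v';               % 6 x 6 spectral matrix (outer product v*v^H)
% diag(R) = v .* conj(v) is real and non-negative, as expected.
% Note: a single-snapshot R is rank 1; in practice, average R over
% several frames before the eigendecomposition so the noise subspace
% is well defined.
```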
Remya Prabhakaran on 18 May 2022
I tried different ways to implement what you mentioned. I am attaching my code here; will you please help me find what is wrong?
```matlab
clear all; close all; clc

%% Parameters
N  = 6;        % number of array elements
FS = 16000;    % sampling rate (Hz)
L  = 1024;     % number of samples per frame
Fn = FS/2;     % Nyquist frequency
fc = 156.25;   % common peak frequency across channels (Hz)

frame_1024 = audio111(1:1024,:);   % audio111: recorded [samples x 6] data

%% FFT and energy gating
fft_sig     = fft(frame_1024);
abs_fft_sig = abs(fft_sig);
energy_db   = 20*log10(abs_fft_sig);   % log10, not natural log, for dB
minimum_db  = -50;                     % minimum energy threshold (dB)
Fv = linspace(0, 1, fix(L/2)+1)*Fn;    % one-sided frequency axis (Hz)
% zero out one-sided bins below the energy threshold
abs_fft_sig(1:513,:) = abs_fft_sig(1:513,:) .* (energy_db(1:513,:) >= minimum_db);

%% Find peaks in each channel to locate the dominant frequency
for k1 = 1:N
    [pks, locs] = findpeaks(abs_fft_sig(1:513, k1));
    fprintf('channel %d peak frequencies (Hz):\n', k1);
    disp(Fv(locs));
end

%% Spectral matrix at the bin corresponding to fc
bin_idx = round(fc*L/FS) + 1;          % 156.25 Hz -> bin 11 (row 4 was wrong)
v = fft_sig(bin_idx, :).';             % 6 x 1 complex vector at that bin
spectral_matrix = v * v';              % 6 x 6 outer product; diagonal is real

%% Eigendecomposition and noise subspace
[eigenvec, eigenval] = eig(spectral_matrix);
[~, order] = sort(real(diag(eigenval)), 'descend');  % largest eigenvalue first
eigenvec = eigenvec(:, order);
eigv = eigenvec(:, 2:N);               % noise subspace (one source assumed)

c      = 343;                          % speed of sound (m/s)
lambda = c/fc;

%% MUSIC spectrum for the UCA
ph = 0:0.5:360;                        % azimuth grid (deg)
th = 0:0.5:360;                        % elevation grid (deg)
p  = zeros(length(th), length(ph));    % preallocate to the full grid size
k  = (1:N).';
for i = 1:length(th)
    for j = 1:length(ph)
        theta = th(i)*pi/180;
        phi   = ph(j)*pi/180;
        phase = ((2*pi*0.03864)/lambda) * (sin(theta)*cos(phi - (2*pi*k)/N));
        aa    = exp(1i*phase);         % UCA steering vector, radius 0.03864 m
        p(i,j) = 10*log10(abs(1/((aa'*eigv)*(eigv'*aa))));
    end
end

%% Peak of the MUSIC spectrum
[~, idx] = max(p(:));
[ti, pj] = ind2sub(size(p), idx);      % row -> elevation index, col -> azimuth index
doa_az = ph(pj)
doa_el = th(ti)
```

