Question regarding implementation of source code for Generalised regression neural network (newgrnn)

I was comparing the source code of the function newgrnn with the original formulation in Specht (1991).
In Specht's paper, he denotes the estimate as
$$\hat{Y}(X)=\frac{\sum_{i=1}^{n} Y^{i}\,\exp\!\left(-\frac{D_i^{2}}{2\sigma^{2}}\right)}{\sum_{i=1}^{n} \exp\!\left(-\frac{D_i^{2}}{2\sigma^{2}}\right)},\qquad D_i^{2}=(X-X^{i})^{T}(X-X^{i}).$$
Note specifically, from the choice of kernel estimator, that the argument inside the brackets is $-D_i^{2}/(2\sigma^{2})$. In the source code for the GRNN, in lines 115 to 127, it shows:
% Simulation
net.inputs{1}.size = R;
net.layers{1}.size = Q;
net.inputWeights{1,1}.weightFcn = 'dist';
net.layers{1}.netInputFcn = 'netprod';
net.layers{1}.transferFcn = 'radbasn';
net.layers{2}.size = S;
net.layerWeights{2,1}.weightFcn = 'dotprod';
% Weight and Bias Values
net.b{1} = zeros(Q,1)+sqrt(-log(.5))/param.spread;
net.iw{1,1} = p';
net.lw{2,1} = t;
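If I read those settings correctly (`dist` gives the Euclidean distance from the input to each stored pattern, `netprod` multiplies that distance by the bias, and `radbasn` is the normalized radial basis transfer function), the network output should work out to
$$\hat{y}(x)=\frac{\sum_i t_i\,\exp\!\big(-(b\,\lVert x-p_i\rVert)^{2}\big)}{\sum_i \exp\!\big(-(b\,\lVert x-p_i\rVert)^{2}\big)},\qquad b=\frac{\sqrt{-\log 0.5}}{\text{spread}}.$$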
I am trying to figure out why this code differs from Specht's paper with regard to
net.b{1} = zeros(Q,1)+sqrt(-log(.5))/param.spread;
% And not
net.b{1} = zeros(Q,1)+sqrt(1/2)/param.spread; % as indicated by Specht's paper.
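For the exponents to match Specht's kernel, one would need
$$\exp\!\left(-\frac{D^{2}}{2\sigma^{2}}\right)=\exp\!\big(-(b\,D)^{2}\big)\;\Longrightarrow\; b=\frac{1}{\sqrt{2}\,\sigma},$$
which is where my `sqrt(1/2)/param.spread` comes from.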
A quick-and-dirty simulation with a hand-coded version (1-D only) and the GRNN function in MATLAB demonstrates this point:
clear all
close all
x = (-2:1:2)';
x = normalize(x);
y =[-1,-1,-1,0,1]';
sigma=1;
net = newgrnn(x',y',sigma);
testx = (-4:.1:4)';
prediction_newgrnn = net(testx')';
% Own simulation of the GRNN from Specht's paper
output_v2 = grnn_own_v2(x,y,sigma,testx);
output_v3 = grnn_own_v3(x,y,sigma,testx);
figure
plot(testx,prediction_newgrnn)
hold on
plot(testx,output_v2,'Color','red');
% Figure 1: newgrnn vs Specht's constant (the curves differ)
figure
plot(testx,prediction_newgrnn)
hold on
plot(testx,output_v3,'Color','red');
% Figure 2: newgrnn vs the -log(0.5) constant (the curves coincide)
function output1 = grnn_own_v2(x,y,sigma,testx)
% GRNN as written in Specht (1991): kernel exp(-D^2/(2*sigma^2))
output1 = zeros(size(testx));
for j = 1:length(testx)
    exp_vec = zeros(size(x));
    for i = 1:length(x)
        d = (testx(j) - x(i))'*(testx(j) - x(i));  % squared distance D_i^2
        exp_vec(i) = exp(-d/(2*sigma^2));
    end
    output1(j) = dot(y,exp_vec)/sum(exp_vec);      % normalized weighted sum
end
end
function output2 = grnn_own_v3(x,y,sigma,testx)
% GRNN with newgrnn's constant: kernel exp(log(0.5)*D^2/sigma^2)
output2 = zeros(size(testx));
for j = 1:length(testx)
    exp_vec = zeros(size(x));
    for i = 1:length(x)
        d = (testx(j) - x(i))'*(testx(j) - x(i));  % squared distance D_i^2
        exp_vec(i) = exp(-(-log(0.5))*d/sigma^2);
    end
    output2(j) = dot(y,exp_vec)/sum(exp_vec);      % normalized weighted sum
end
end
So the question remains: why is the numerator in the bias term
sqrt(-log(.5))
and not 1/sqrt(2)?
Many thanks for reading :).

Answers (1)

Aditya on 15 Apr 2024
The use of sqrt(-log(0.5)) instead of 1/sqrt(2) is a specific choice that relates to how the bias shapes the radial basis function. With bias b = sqrt(-log(0.5))/spread, the neuron's activation exp(-(b*d)^2) decreases to exactly 0.5 when the input distance d equals spread, since exp(-(sqrt(-log(0.5)))^2) = exp(log(0.5)) = 0.5. Specht's constant 1/(sqrt(2)*sigma) would instead give exp(-1/2) ≈ 0.61 at that distance. The two forms describe the same family of Gaussian kernels; they simply attach different meanings to the width parameter: newgrnn defines spread as the distance at which the response has fallen to one half, whereas Specht's sigma is the standard deviation of the kernel. This choice is mathematically justified and aligns with the goal of controlling the neuron's response relative to the spread parameter.
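A quick numeric check of that claim (a minimal sketch; the variable names and the spread value here are mine, chosen for illustration):
spread = 1.7;                          % arbitrary example value
b_matlab = sqrt(-log(0.5))/spread;     % newgrnn's bias
b_specht = sqrt(1/2)/spread;           % constant implied by Specht (1991)
d = spread;                            % evaluate at distance == spread
exp(-(b_matlab*d)^2)                   % = 0.5000, half response at d = spread
exp(-(b_specht*d)^2)                   % = exp(-1/2) ≈ 0.6065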
1 Comment
George on 15 Apr 2024
Many thanks for replying Aditya.
"This choice is mathematically justified and aligns with the goal of controlling the neuron's response relative to the spread parameter."
May I ask that you point me to the reference for this mathematical justification you are talking about?
