Inconsistencies in functions called by incrementalRegressionKernel

1 view (last 30 days)
Regarding classreg.learning.rkeutils.featureMapper, which is called by the incremental kernel learners (incrementalRegressionKernel), there are inconsistencies I am unable to resolve:
1. In the file's help text you state:
NU = [nu_1 nu_2 ... nu_(n/d)]
nu_i = diag(S(:,i))*H*diag(G(:,i))*PM*H*diag(B(:,i)) ./ (sigma*sqrt(d))
This implies the maximum i is n/d, which cannot be right: S, G, and B have dimensions d x n/2/d, so i cannot be greater than n/2/d (see the dimension sketch after this list).
2. You state that Z = [cos(X*NU) sin(X*NU)].
This means that if Z has dimensions 1 x ncols, for example, then
Z(1,1:ncols/2).^2 + Z(1,ncols/2+1:ncols).^2 = ones(1,ncols/2)
(the sum of sine squared and cosine squared). If there is some scaling, the ones would be replaced by the scaling factor. However, this is not the case when the map function is used to get Z. mapfwht gives a different result that does satisfy the sin^2 + cos^2 = 1 rule, but it is not the method used by default.
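
To make the dimension argument in point 1 concrete, here is a minimal sketch of the construction exactly as the help text writes it. S, G, B, and PM are placeholder random draws (the mapper's actual internal distributions are not public), and H is taken as the unnormalized Hadamard matrix:

d = 16; n = 1024; sigma = 40;
nblocks = n/2/d;                          % 32 blocks, matching size(S,2)
H = hadamard(d);                          % d-by-d Walsh-Hadamard matrix
S = randn(d, nblocks);                    % placeholder scaling draws
G = randn(d, nblocks);                    % placeholder Gaussian draws
B = sign(randn(d, nblocks));              % placeholder +/-1 draws
PM = eye(d); PM = PM(randperm(d), :);     % random permutation matrix
NU = zeros(d, 0);
for i = 1:nblocks
    nu_i = diag(S(:,i))*H*diag(G(:,i))*PM*H*diag(B(:,i)) ./ (sigma*sqrt(d));
    NU = [NU nu_i]; %#ok<AGROW>
end
disp(size(NU))                            % d x n/2, so the last valid block index is n/2/d
X = rand(8, d);
Z = [cos(X*NU) sin(X*NU)];
disp(size(Z))                             % 8 x n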
Finally, a special request: please, in future MATLAB versions, expose the mapping, beta, and bias of the SVM regression plane as explicit public properties of the model returned by updateMetricsAndFit. This is very important for us as researchers.

Answers (1)

Abhaya on 30 Dec 2024
Hi Yasmine,
The map function in classreg.learning.rkeutils.featureMapper generates a matrix 'Z' containing 'n' random basis functions for the input matrix 'X'. The matrix 'Z' has dimensions m * n, where m is the number of rows (observations) in X and n is the number of basis functions.
The process generates the NU matrix, which consists of n/2/d blocks (d being the number of columns in X), each of size d * d, resulting in a matrix of size d * n/2. This confirms your observation in point 1 that the maximum block index is n/2/d. The Z matrix is created by horizontally concatenating the cosine and sine projections of X*NU, which leads to the final m * n matrix.
In the case of the 'fastfood' approach, the map function approximates the kernel computation. As a result, multiple scaling factors influence the sum of the squared cosine and sine features, leading to a range of values rather than a single value of 1. In contrast, the 'kitchensinks' approach uses a single scaling factor due to the constraints associated with its input parameters.
For a better understanding, please refer to the following code:
X = rand(16,16);
sigma = 40;
% Random kitchen sinks: a single scaling factor
FM1 = classreg.learning.rkeutils.featureMapper(size(X,2), 1024, 'kitchensinks');
Z1 = map(FM1, X, sigma);
n = size(Z1,2);
s1 = Z1(:,1:n/2).^2 + Z1(:,n/2+1:n).^2;      % cos^2 + sin^2 per feature pair
disp(min(s1(:)))                             % 0.0020
disp(max(s1(:)))                             % 0.0020
% Fastfood: multiple scaling factors
FM2 = classreg.learning.rkeutils.featureMapper(size(X,2), 1024, 'fastfood');
Z2 = map(FM2, X, sigma);
n2 = size(Z2,2);
s2 = Z2(:,1:n2/2).^2 + Z2(:,n2/2+1:n2).^2;   % use n2 here (the original snippet reused n)
disp(min(s2(:)))                             % 1.1292e-09
disp(max(s2(:)))                             % 0.0039
This shows that the scaling factor in Z1 for the 'kitchensinks' approach is a single value (0.0020), while the scaling factors in the 'fastfood' approach range from 1.1292e-09 to 0.0039.
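As an additional sanity check (continuing from the snippet above), random Fourier features of this kind are meant to make Z*Z' approximate a Gaussian kernel matrix regardless of the per-pair scaling. Note that the exact kernel convention used internally is an assumption here:

% Assumption: the mapper targets a Gaussian kernel of the form
% exp(-||x-u||^2/(2*sigma^2)); the true internal convention may differ.
K = exp(-pdist2(X, X).^2 ./ (2*sigma^2));    % hypothetical target kernel matrix
relErr1 = norm(Z1*Z1' - K, 'fro') / norm(K, 'fro');
relErr2 = norm(Z2*Z2' - K, 'fro') / norm(K, 'fro');
disp([relErr1 relErr2])                      % both should be small if the assumption holds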
For more detailed information, you can refer to the MATLAB documentation for the featureMapper function by running the following command:
doc classreg.learning.rkeutils.featureMapper
I hope this helps clarify your query.

Release

R2024a
