Optimal values for cell data
4 Comments
Walter Roberson
on 10 Mar 2020
- Every student claims that their code is "sensitive" on the grounds that other students might read the posting and copy from them.
- There are no National Security considerations here.
- If there are Trade Secret matters here, then legally speaking you destroyed the "secret" as soon as you posted the material. (Trade Secret case law is really strict on that point: if a piece of paper with a Trade Secret blows out of your hand and someone finds it, you have lost Trade Secret status.)
- You would have a difficult time convincing us that you are working on a Patent: people working on Patents know to hire consultants with Non-Disclosure Agreements.
When I look at your previous postings, the most generous reading I can come up with is that you might be working on a thesis. For a thesis, the important part is that the ideas are yours; it is permitted to seek assistance with implementation.
Accepted Answer
Nipun Katyal
on 5 Mar 2020
Yes, there is a problem in how the cell array is being handled.
Here is the correct way to handle it:
% Make some data
Daten = rand(100, 3);
Daten(:,3) = Daten(:,1) + Daten(:,2) + .1*randn(100, 1); % Minimum asymptotic error is .1
[m,n] = size(Daten) ;
% Split into train and test
P = 0.7 ;
Training = Daten(1:round(P*m),:) ;
Testing = Daten(round(P*m)+1:end,:);
XTr = Training(:,1:n-1);
YTr = Training(:,n);
XTe = Testing(:,1:n-1);
YTe = Testing(:,n);
XTrain=num2cell(XTr(:,1));
YTrain=num2cell(YTr(:,1));
XTest=num2cell(XTe);
YTest=num2cell(YTe);
% Define a train/validation split to use inside the objective function
cv = cvpartition(numel(YTrain), 'Holdout', 1/3);
% Define hyperparameters to optimize
vars = [optimizableVariable('hiddenLayerSize', [1,20], 'Type', 'integer');
optimizableVariable('lr', [1e-3 1], 'Transform', 'log')];
% Optimize
minfn = @(T)kfoldLoss(XTrain', YTrain', cv, T.hiddenLayerSize, T.lr);
results = bayesopt(minfn, vars,'IsObjectiveDeterministic', false,...
'AcquisitionFunctionName', 'expected-improvement-plus');
T = bestPoint(results);
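% bestPoint returns a one-row table of the best hyperparameters,
% accessed as T.hiddenLayerSize and T.lr.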
function rmse = kfoldLoss(x, y, cv, numHid, lr)
% Train net.
net = feedforwardnet(numHid, 'traingd');
net.trainParam.lr = lr;
net = train(net, x(:,cv.training), y(:,cv.training));
% Evaluate on validation set and compute rmse
ypred = net(x(:, cv.test));
% Build a cell array of exponents (all 2s) the same size as ypred
n = size(ypred);
pw = num2cell(2*ones(n));
cMinus = cellfun(@minus, ypred, y(cv.test), 'UniformOutput', false);
cSquare = cellfun(@power, cMinus, pw, 'UniformOutput', false);
cSquareVect = cell2mat(cSquare);
cMean = mean(cSquareVect);
rmse = sqrt(cMean);
%rmse = sqrt(mean((ypred - y(cv.test)).^2));
end
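For what it's worth, the cellfun pipeline above computes the same value as the commented-out one-liner: once every cell holds a scalar, the cells can be collapsed with cell2mat. A minimal equivalent sketch, using the same variables as inside kfoldLoss:
% Equivalent numeric RMSE: collapse the scalar cells into vectors first
yp = cell2mat(ypred);        % 1-by-N numeric predictions
yt = cell2mat(y(cv.test));   % 1-by-N numeric targets
rmse = sqrt(mean((yp - yt).^2));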
4 Comments
Nipun Katyal
on 9 Mar 2020
This should do it:
% Make some data
Daten = rand(100, 3);
Daten(:,3) = Daten(:,1) + Daten(:,2) + .1*randn(100, 1); % Minimum asymptotic error is .1
[m,n] = size(Daten) ;
% Split into train and test
P = 0.7 ;
Training = Daten(1:round(P*m),:) ;
Testing = Daten(round(P*m)+1:end,:);
XTr = Training(:,1:n-1);
YTr = Training(:,n);
XTe = Testing(:,1:n-1);
YTe = Testing(:,n);
XTrain=num2cell(XTr(:,1));
YTrain=num2cell(YTr(:,1));
XTest=num2cell(XTe);
YTest=num2cell(YTe);
% Define a train/validation split to use inside the objective function
cv = cvpartition(numel(YTrain), 'Holdout', 1/3);
% Define hyperparameters to optimize
vars = [optimizableVariable('hiddenLayerSize', [1,20], 'Type', 'integer');
optimizableVariable('lr', [1e-3 1e-1], 'Transform', 'log')];
% Optimize
minfn = @(T)kfoldLoss(XTrain', YTrain', cv, T.hiddenLayerSize, T.lr);
results = bayesopt(minfn, vars,'IsObjectiveDeterministic', false,...
'AcquisitionFunctionName', 'expected-improvement-plus');
T = bestPoint(results);
% Train final model on full training set using the best hyperparameters
net = layrecnet(1:2,T.hiddenLayerSize, 'traingd');
net.trainParam.lr = T.lr;
net = train(net, XTrain', YTrain');
% Evaluate on test set and compute final rmse
% ypred = net(XTest');
% finalrmse = sqrt(mean((ypred - YTest').^2))
% Evaluate on the test set (column 1, matching training) and compute the final rmse
ypred = net(XTest(:,1)');
% Build a cell array of exponents (all 2s) the same size as ypred
n = size(ypred);
pw = num2cell(2*ones(n));
cMinus = cellfun(@minus, ypred, YTest', 'UniformOutput', false);
cSquare = cellfun(@power, cMinus, pw, 'UniformOutput', false);
cSquareVect = cell2mat(cSquare);
cMean = mean(cSquareVect);
Rmse = sqrt(cMean)
function rmse = kfoldLoss(x, y, cv, numHid, lr)
% Train net.
net = feedforwardnet(numHid, 'traingd');
net.trainParam.lr = lr;
net = train(net, x(:,cv.training), y(:,cv.training));
% Evaluate on validation set and compute rmse
ypred = net(x(:, cv.test));
% Build a cell array of exponents (all 2s) the same size as ypred
n = size(ypred);
pw = num2cell(2*ones(n));
cMinus = cellfun(@minus, ypred, y(cv.test), 'UniformOutput', false);
cSquare = cellfun(@power, cMinus, pw, 'UniformOutput', false);
cSquareVect = cell2mat(cSquare);
cMean = mean(cSquareVect);
rmse = sqrt(cMean);
%rmse = sqrt(mean((ypred - y(cv.test)).^2));
end
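Note that layrecnet, like the other dynamic networks, treats a 1-by-TS cell array as a sequence of TS time steps, which is why the cell arrays are transposed before train and sim. A small standalone sketch of that convention with toy data (the variable names here are illustrative, not from the question):
Xseq = num2cell(rand(1, 50));             % 50 time steps of scalar input
Tseq = num2cell(2*cell2mat(Xseq) + 0.1);  % toy target sequence
net = layrecnet(1:2, 5);                  % 2 layer delays, 5 hidden neurons
[Xs, Xi, Ai, Ts] = preparets(net, Xseq, Tseq);  % set up initial delay states
net = train(net, Xs, Ts, Xi, Ai);
Ypred = net(Xs, Xi, Ai);                  % 1-by-50 cell of predictions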
More Answers (1)
Nipun Katyal
on 4 Mar 2020
In order to perform operations on cell arrays, use cellfun as shown below:
% Make some data
Daten = rand(100, 3);
Daten(:,3) = Daten(:,1) + Daten(:,2) + .1*randn(100, 1); % Minimum asymptotic error is .1
[m,n] = size(Daten) ;
% Split into train and test
P = 0.7 ;
Training = Daten(1:round(P*m),:) ;
Testing = Daten(round(P*m)+1:end,:);
XTr = Training(:,1:n-1);
YTr = Training(:,n);
XTe = Testing(:,1:n-1);
YTe = Testing(:,n);
XTrain=num2cell(XTr(:,1));
YTrain=num2cell(YTr(:,1));
XTest=num2cell(XTe);
YTest=num2cell(YTe);
% Define a train/validation split to use inside the objective function
cv = cvpartition(numel(YTrain), 'Holdout', 1/3);
% Define hyperparameters to optimize
vars = [optimizableVariable('hiddenLayerSize', [1,20], 'Type', 'integer');
optimizableVariable('lr', [1e-3 1], 'Transform', 'log')];
% Optimize
minfn = @(T)kfoldLoss(XTrain', YTrain', cv, T.hiddenLayerSize, T.lr);
results = bayesopt(minfn, vars,'IsObjectiveDeterministic', false,...
'AcquisitionFunctionName', 'expected-improvement-plus');
T = bestPoint(results);
function rmse = kfoldLoss(x, y, cv, numHid, lr)
% Train net.
net = feedforwardnet(numHid, 'traingd');
net.trainParam.lr = lr;
net = train(net, x(:,cv.training), y(:,cv.training));
% Evaluate on validation set and compute rmse
ypred = net(x(:, cv.test));
cMinus = cellfun(@minus, ypred, y(cv.test), 'UniformOutput', false);
cSquare = cellfun(@(d) d.^2, cMinus, 'UniformOutput', false);
rmse = sqrt(mean(cell2mat(cSquare)));
%rmse = sqrt(mean((ypred - y(cv.test)).^2));
end
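The key detail is 'UniformOutput', false, which lets each cellfun call return a cell instead of requiring a uniform scalar output. A standalone toy example (values are illustrative only):
a = {1, 2, 3};
b = {0.5, 1.5, 2.5};
d = cellfun(@minus, a, b, 'UniformOutput', false);  % {0.5, 0.5, 0.5}
sq = cellfun(@(x) x.^2, d)                          % numeric [0.25 0.25 0.25]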
1 Comment
Nipun Katyal
on 4 Mar 2020
In your case, the RMSE function will be:
function rmse = kfoldLoss(x, y, cv, numHid, lr)
% Train net.
net = feedforwardnet(numHid, 'traingd');
net.trainParam.lr = lr;
net = train(net, x(:,cv.training), y(:,cv.training));
% Evaluate on validation set and compute rmse
ypred = net(x(:, cv.test));
% Build a cell array of exponents (all 2s) the same size as ypred
n = size(ypred);
pw = num2cell(2*ones(n));
cMinus = cellfun(@minus, ypred, y(cv.test), 'UniformOutput', false);
cSquare = cellfun(@power, cMinus, pw, 'UniformOutput', false);
cMean = mean(cell2mat(cSquare));
rmse = sqrt(cMean);
%rmse = sqrt(mean((ypred - y(cv.test)).^2));
end
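Since bayesopt requires the objective to return a real, finite scalar, it can be worth sanity-checking the function once before starting the optimization. A quick check, using the variables defined in the scripts above and arbitrary hyperparameter values:
% Sanity check: the objective must return a real, finite scalar
loss = kfoldLoss(XTrain', YTrain', cv, 10, 0.01);
assert(isscalar(loss) && isreal(loss) && isfinite(loss))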