Design radial basis network
net = newrb(P,T,goal,spread,MN,DF) takes two of these arguments:
P — R-by-Q matrix of Q input vectors
T — S-by-Q matrix of Q target class vectors
goal — Mean squared error goal
spread — Spread of radial basis functions
MN — Maximum number of neurons
DF — Number of neurons to add between displays
and returns a new radial basis network.
Radial basis networks can be used to approximate functions. newrb adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.
The larger spread is, the smoother the function approximation. Too large a spread means a lot of neurons are required to fit a fast-changing function. Too small a spread means many neurons are required to fit a smooth function, and the network might not generalize well. Call newrb with different spreads to find the best value for a given problem.
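For example, one way to compare spreads is to design a network for each candidate value and inspect the resulting error. The data, goal, and spread values below are illustrative choices, not recommendations:

```matlab
% Fit a noisy sine curve with several candidate spread values.
P = 0:0.25:10;
T = sin(P) + 0.1*randn(size(P));
for spread = [0.1 1 10]
    net = newrb(P,T,0.02,spread);    % goal of 0.02 chosen for illustration
    Y = sim(net,P);
    fprintf('spread = %4.1f  MSE = %g\n', spread, mse(T - Y));
end
```

A spread that is too small tends to drive the training error down while fitting the noise; comparing errors on held-out inputs gives a better picture of generalization.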
Design a Radial Basis Network
This example shows how to design a radial basis network.
Design a radial basis network with inputs P and targets T.

P = [1 2 3];
T = [2.0 4.1 5.9];
net = newrb(P,T);

Simulate the network for a new input.

P = 1.5;
Y = sim(net,P)
P — Input matrix
Input vectors, specified as an R-by-Q matrix of Q input vectors.
T — Target class matrix
Target class vectors, specified as an S-by-Q matrix of Q target class vectors.
goal — Error goal
0.0 (default) | scalar
Mean squared error goal, specified as a scalar.
spread — Spread of basis functions
1 (default) | scalar
Spread of radial basis functions, specified as a scalar.
MN — Maximum number of neurons
Q (default) | scalar
Maximum number of neurons, specified as a scalar.
DF — Neurons between displays
25 (default) | scalar
Number of neurons to add between displays, specified as a scalar.
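Taken together, a call that supplies all four optional arguments might look like the following. The data and parameter values here are illustrative:

```matlab
P = -1:0.1:1;                 % 21 input vectors
T = sin(2*pi*P);              % matching targets
goal   = 0.01;                % mean squared error goal
spread = 0.8;                 % spread of the radial basis functions
MN     = 15;                  % add at most 15 neurons
DF     = 5;                   % display progress every 5 neurons
net = newrb(P,T,goal,spread,MN,DF);
```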
net — Radial basis network
New radial basis network, returned as a network object.
newrb creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons, and calculates its weighted input with dotprod and its net input with netsum. Both layers have biases.
Initially, the radbas layer has no neurons. The following steps are repeated until the network's mean squared error falls below goal.

1. The network is simulated.
2. The input vector with the greatest error is found.
3. A radbas neuron is added with weights equal to that vector.
4. The purelin layer weights are redesigned to minimize error.
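The steps above can be sketched as follows. This is a simplified illustration for scalar inputs and targets, not the actual newrb implementation; the bias formula assumes the common convention that a neuron's response falls to 0.5 at a distance of spread from its center.

```matlab
function net = rbfSketch(P, T, goal, spread, MN)
% Simplified sketch of the greedy design loop (scalar inputs/targets only).
C = [];                                   % radbas neuron centers
E = T;                                    % error of the empty network (output 0)
b = sqrt(log(2)) / spread;                % response is 0.5 at distance spread
for k = 1:MN
    [~, i] = max(abs(E));                 % input vector with greatest error
    C(end+1) = P(i);                      % add a radbas neuron centered there
    A = exp(-(b * abs(C' - P)).^2);       % hidden outputs, numel(C)-by-numel(P)
    Wb = [A; ones(1, numel(P))]' \ T';    % least-squares purelin weights + bias
    E = T - Wb' * [A; ones(1, numel(P))]; % error of the redesigned network
    if mean(E.^2) <= goal
        break
    end
end
net = struct('centers', C, 'bias', b, 'outputWeights', Wb);
end
```

Because each added neuron is centered exactly on the worst-fit input vector, the error at that point drops immediately, and the least-squares redesign of the linear output layer then minimizes the mean squared error over all inputs.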
Introduced before R2006a