Is there any documentation on how to build a transformer encoder from scratch in matlab?
182 views (last 30 days)
I am building a transformer encoder, and I came across the following File Exchange entry: https://www.mathworks.com/matlabcentral/fileexchange/107375-transformer-models
However, that entry only shows examples of how to use a pretrained transformer model. I just need an example of how to build a model, something that gives a general idea so I can build on it. I have studied the basics of transformers, but I am having some difficulty building the model from scratch.
Thank you in advance.
1 Comment
Shubham
on 8 Sep 2023
Hi,
You can refer to this documentation:
The article uses TensorFlow, but you can replicate the approach in MATLAB.
Accepted Answer
Ben
on 18 Sep 2023
The general structure of an intermediate encoder block looks like this:
selfAttentionLayer(numHeads,numKeyChannels) % self attention
additionLayer(2,Name="attention_add") % residual connection around attention
layerNormalizationLayer(Name="attention_norm") % layer norm
fullyConnectedLayer(feedforwardHiddenSize) % feedforward part 1
reluLayer % nonlinear activation
fullyConnectedLayer(attentionHiddenSize) % feedforward part 2
additionLayer(2,Name="feedforward_add") % residual connection around feedforward
layerNormalizationLayer() % layer norm
You would need to hook up the connections to the addition layers appropriately.
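For example, a minimal sketch of that wiring, assuming the encoder block above sits inside a larger layer array in which the layer feeding into the block is named "block_in" (a placeholder name of mine), could look like:
net = dlnetwork(layers,Initialize=false); % layers = [input/embedding layers, encoder block above, ...]
net = connectLayers(net,"block_in","attention_add/in2"); % residual connection around attention
net = connectLayers(net,"attention_norm","feedforward_add/in2"); % residual connection around feedforward
The full example further down uses exactly this pattern with the real layer names.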
Typically you would have multiple copies of this encoder block in a transformer encoder.
You also typically need an embedding at the start of the model. For text data it's common to use wordEmbeddingLayer, whereas for image data you would use patchEmbeddingLayer.
Also, the above encoder block makes no use of positional information, so if your training task requires positional information, you would typically inject it via a positionEmbeddingLayer or sinusoidalPositionEncodingLayer.
Finally, the last encoder block will typically feed into a model "head" that maps the encoder output back to the dimensions of the training targets. Often this can just be some simple fullyConnectedLayer-s.
Note that for both image and sequence input data the output of the encoder is still an image or sequence, so for image classification and sequence-to-one tasks you need some way to map that sequence of encoder outputs to a fixed-size representation. For this you could use indexing1dLayer or pooling layers like globalMaxPooling1dLayer.
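For instance, a minimal sequence-to-one classification head (numClasses is a placeholder for the number of classes in your task) might be:
indexing1dLayer % keep only the first sequence element as a fixed-size summary
fullyConnectedLayer(numClasses) % map to class scores
softmaxLayer % class probabilities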
Here's a demonstration of the general architecture for a toy task. Given a sequence x of 10 integers, each between 1 and 10, define the target y = x(x(1)) + x(x(2)); that is, the first two elements of the sequence are used as indices into the sequence, and y is the sum of the two indexed values. For example, x = [2 5 1 7 3 8 6 4 9 10] has x(x(1)) = x(2) = 5 and x(x(2)) = x(5) = 3, so y = 8. This is a toy problem that requires positional information to solve and can be easily implemented in code. You can train a transformer encoder to predict y from x as follows:
% Create model
% We will use 2 encoder layers.
numHeads = 1;
numKeyChannels = 20;
feedforwardHiddenSize = 100;
modelHiddenSize = 20;
% Since the values in the sequence can be 1,2, ..., 10 the "vocabulary" size is 10.
vocabSize = 10;
inputSize = 1;
encoderLayers = [
sequenceInputLayer(1,Name="in") % input
wordEmbeddingLayer(modelHiddenSize,vocabSize,Name="embedding") % embedding
positionEmbeddingLayer(modelHiddenSize,vocabSize) % position embedding (the second argument is the maximum position; here the sequence length happens to equal vocabSize)
additionLayer(2,Name="embed_add") % add the data and position embeddings
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 1
additionLayer(2,Name="attention_add") %
layerNormalizationLayer(Name="attention_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward_add") %
layerNormalizationLayer(Name="encoder1_out") %
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 2
additionLayer(2,Name="attention2_add") %
layerNormalizationLayer(Name="attention2_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward2_add") %
layerNormalizationLayer() %
indexing1dLayer %
fullyConnectedLayer(inputSize)]; % output head
net = dlnetwork(encoderLayers,Initialize=false);
net = connectLayers(net,"embed_add","attention_add/in2");
net = connectLayers(net,"embedding","embed_add/in2");
net = connectLayers(net,"attention_norm","feedforward_add/in2");
net = connectLayers(net,"encoder1_out","attention2_add/in2");
net = connectLayers(net,"attention2_norm","feedforward2_add/in2");
net = initialize(net);
% analyze the network to see how data flows through it
analyzeNetwork(net)
% create toy training data
% We will generate 10,000 sequences of length 10
% with values that are random integers 1-10
numObs = 10000;
seqLen = 10;
x = randi([1,10],[seqLen,numObs]);
% Loop over the observations to create y(i) = x(x(1,i),i) + x(x(2,i),i)
y = zeros(numObs,1);
for i = 1:numObs
idx = x(1:2,i);
y(i) = sum(x(idx,i));
end
x = num2cell(x,1);
% specify training options and train
opts = trainingOptions("adam", ...
MaxEpochs = 200, ...
MiniBatchSize = numObs/10, ...
Plots="training-progress", ...
Shuffle="every-epoch", ...
InitialLearnRate=1e-2, ...
LearnRateDropFactor=0.9, ...
LearnRateDropPeriod=10, ...
LearnRateSchedule="piecewise");
net = trainnet(x,y,net,"mse",opts);
% test the network on a new input
x = randi([1,10],[seqLen,1]);
ypred = predict(net,x)
yact = x(x(1)) + x(x(2))
Obviously this is a toy task, but I think it demonstrates the parts of the standard transformer architecture. Two additional things you would likely need to deal with in real tasks are:
- For sequence data, the observations often have different sequence lengths. For this you need to pad the data and pass padding masks to the selfAttentionLayer so that no attention is paid to padding elements (see the sketch after this list).
- Often the encoder will be initially pre-trained on a self-supervised task, e.g. masked-language-modeling for natural language encoders.
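As a rough sketch of the padding step (mySequences is a placeholder name for a cell array of channels-by-time numeric sequences; how the mask is then supplied to the attention computation depends on your release, so check the selfAttentionLayer documentation):
% Pad all observations to the length of the longest sequence along the time
% dimension (dim 2) and get a logical mask marking real data (true) vs padding (false).
[XPad,mask] = padsequences(mySequences,2);
% The mask can then be used so that padded positions are ignored, for example
% via a padding-mask input on the attention layer if your release supports one.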
Hope that helps.
10 Comments
Ben
on 19 Dec 2024 at 15:55
@haohaoxuexi1 - as an example, you could take the sequence-to-one classification task in this documentation example and swap out the layers for the following:
layers = [
sequenceInputLayer(numChannels)
fullyConnectedLayer(numHiddenUnits,Name="embed")
positionEmbeddingLayer(numHiddenUnits,200)
additionLayer(2,Name="add")
selfAttentionLayer(1,8)
indexing1dLayer
layerNormalizationLayer
fullyConnectedLayer(numHiddenUnits)
geluLayer
fullyConnectedLayer(numClasses)
softmaxLayer];
layers = dlnetwork(layers,Initialize=false);
layers = connectLayers(layers,"embed","add/in2");
This seems to be able to perform the classification task in the example. It is a simplified transformer: there are no residual connections around the multi-head attention or multi-layer perceptron (MLP) parts of the layer. You can add those with additionLayer and connectLayers (a sketch is shown at the end of this comment).
The network demonstrates the use of selfAttentionLayer, positionEmbeddingLayer, and indexing1dLayer. The positionEmbeddingLayer creates a representation of positional information via a learnt embedding, which is then added to the linear embedding of the data produced by the first fullyConnectedLayer. The selfAttentionLayer performs the multi-head self attention. The indexing1dLayer takes a sequence as input and returns just the first sequence element. In a sense this "pools" the sequence by disregarding everything except the first sequence element, which is common in sequence-to-one transformer encoders, since the first sequence element (and any other sequence element) can pay attention to all other sequence elements via the selfAttentionLayer. Other types of pooling are common too, such as global maximum and average pooling.
Typically a transformer will additionally have residual connections around the self-attention and MLP parts of the network, additional layerNormalizationLayer instances, multiple heads in the selfAttentionLayer, and multiple instances of the transformer layer(s) in sequence. For sequence-to-sequence classification, you would remove the indexing1dLayer, as you want the model to output a sequence of classes.
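As an example, here is a sketch of how a residual connection around the attention part could be added to the network above (the names "attn" and "attn_add" are placeholders I've introduced; numChannels, numHiddenUnits, and numClasses come from the documentation example as before):
layers = [
sequenceInputLayer(numChannels)
fullyConnectedLayer(numHiddenUnits,Name="embed")
positionEmbeddingLayer(numHiddenUnits,200)
additionLayer(2,Name="add")
selfAttentionLayer(1,8,Name="attn")
additionLayer(2,Name="attn_add") % residual connection around attention
indexing1dLayer
layerNormalizationLayer
fullyConnectedLayer(numHiddenUnits)
geluLayer
fullyConnectedLayer(numClasses)
softmaxLayer];
net = dlnetwork(layers,Initialize=false);
net = connectLayers(net,"embed","add/in2"); % add data and position embeddings
net = connectLayers(net,"add","attn_add/in2"); % skip connection: embedding output added to attention output
net = initialize(net);
The same pattern, with another additionLayer and connectLayers call, would add a residual connection around the MLP part.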
Ben
on 19 Dec 2024 at 16:02
@Idir I believe you could leave an issue on the GitHub repo if you have a GitHub account, and I could reply there. That is quite an old project that I haven't looked at for some time, and I note in the notebook that I was following the code from https://github.com/acarapetis/curve-shortening-demo. I was using this example to get familiar with programming, in particular how numeric methods can be used to approximate solutions to PDEs, since curve shortening flow is the 1D case of some of the things I was studying at the time.
More Answers (1)
Mehernaz Savai
on 6 Dec 2024 at 19:50
In addition to Ben's suggestions, we have new articles that can be a good source for getting started with Transformers in MATLAB:
0 Comments