Complete transformer model (Encoder + Decoder + Interconnections)

29 views (last 30 days)
Hello
I am wondering if there is already a MATLAB keyboard warrior who has coded (in MATLAB) a full transformer model:
  1. Inputs: Input Embedding + Positional Encoding
  2. Encoder: Multihead Attention + Add & Normalisation + Feedforward + Add & Normalisation
  3. Outputs: Output Embedding + Positional Encoding
  4. Decoder: Masked Multihead Attention + Add & Normalisation + Multihead Attention + Add & Normalisation + Feedforward + Add & Normalisation
  5. Final: Linear and Softmax.
Including all the interconnections between them (see the sketch below for the primitives I mean).
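For concreteness, the primitives these blocks are built from look roughly like the following minimal plain-MATLAB sketch. It is illustrative only: positionalEncoding, scaledDotProductAttention, and all the dimensions are my own naming, not from any existing submission.

function PE = positionalEncoding(seqLen, dModel)
    % Sinusoidal encoding from "Attention Is All You Need"; dModel assumed even.
    pos    = (0:seqLen-1)';                      % token positions as a column vector
    k      = 0:2:dModel-1;                       % even channel indices
    angles = pos ./ (10000 .^ (k / dModel));     % seqLen x dModel/2 angle table
    PE = zeros(seqLen, dModel);
    PE(:, 1:2:end) = sin(angles);                % sine channels
    PE(:, 2:2:end) = cos(angles);                % cosine channels
end

function [Y, A] = scaledDotProductAttention(Q, K, V, mask)
    % Q, K are seqLen x dK; V is seqLen x dV; mask is logical, true = blocked.
    dK     = size(K, 2);
    scores = (Q * K') / sqrt(dK);                % pairwise similarity scores
    if nargin > 3
        scores(mask) = -Inf;                     % e.g. the causal mask in the decoder
    end
    W = exp(scores - max(scores, [], 2));        % numerically stable softmax...
    A = W ./ sum(W, 2);                          % ...rows of A sum to 1
    Y = A * V;                                   % weighted sum of the values
end

Multihead attention then just runs this in parallel over dModel/numHeads-sized channel slices and concatenates the results.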
Thank you
Will

Answers (1)

Yash Sharma on 5 Aug 2024
Hi Will,
You can take a look at the following File Exchange submission.
  2 Comments
Will Serrano on 7 Aug 2024
Hello Yash
Thank you for your answer.
I read that one; it is based on a pre-trained transformer and does not directly represent the transformer components. It also provides the same functionality as a standard LSTM for text classification.
It is acknowledged that transformers with attention are somewhat superior to LSTM-based deep learning; however, I have yet to prove that myself.
Thank you
Will
Will Serrano on 5 Oct 2024
As nobody else seems to have an answer, I have cracked the code myself.
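For anyone who finds this thread later, the wiring of one decoder block comes out roughly as below. This is an illustrative plain-MATLAB skeleton, not the full model: multiheadAttention is an assumed helper built on the attention primitive sketched in the question, and the learned weights are assumed to live in a struct p.

function Y = decoderBlock(Y, encOut, p)
    % Masked self-attention, encoder-decoder cross-attention, and a
    % feedforward, each followed by Add & Normalisation.
    T      = size(Y, 1);
    causal = logical(triu(ones(T), 1));                      % hide future tokens
    A = multiheadAttention(Y, Y, Y, p.selfAttn, causal);     % assumed helper
    Y = layerNorm(Y + A, p.ln1);                             % Add & Normalisation
    C = multiheadAttention(Y, encOut, encOut, p.crossAttn);  % keys/values from the encoder
    Y = layerNorm(Y + C, p.ln2);                             % Add & Normalisation
    F = max(Y * p.W1 + p.b1, 0) * p.W2 + p.b2;               % position-wise ReLU feedforward
    Y = layerNorm(Y + F, p.ln3);                             % Add & Normalisation
end

function Y = layerNorm(X, ln)
    % Per-token layer normalisation with learned gain and bias.
    mu = mean(X, 2);
    sd = std(X, 0, 2);
    Y  = (X - mu) ./ (sd + 1e-6) .* ln.gamma + ln.beta;
end

An encoder block is the same wiring without the causal mask and without the cross-attention step, and the final stage is a linear map to vocabulary logits followed by a softmax, as in point 5 of the question.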
