T5 (language model)

Series of large language models


T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI. Introduced in 2019,[1] T5 models are trained on a massive dataset of text and code using a text-to-text framework. T5 models can perform the text-based tasks they were pretrained on, and they can also be fine-tuned for other tasks. They have been employed in various applications, including chatbots, machine translation systems, text summarization tools, code generation, and robotics.


Like the original Transformer model,[2] T5 models are encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text.

The implementation was updated in 2022 as T5X, which uses JAX.[3] In 2024, Pile-T5 was released, training the same architecture on an improved dataset, the Pile.[4]
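As a concrete illustration of the text-to-text, encoder-decoder interface described above, the sketch below runs a pretrained checkpoint through the Hugging Face transformers library. The library, the t5-small checkpoint name, and the prompt are illustrative assumptions, not details taken from this article.

    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    # Load a small pretrained T5 checkpoint; the encoder reads the input text
    # and the decoder generates the output text token by token.
    tokenizer = T5TokenizerFast.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    inputs = tokenizer("translate English to German: That is good.", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # e.g. "Das ist gut."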

Training

T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities. T5 models can then be fine-tuned on specific downstream tasks, adapting their knowledge to perform well in various applications.
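For a concrete look at the pre-training data, the sketch below streams a few C4 documents through the Hugging Face datasets library. The library, the allenai/c4 dataset identifier, and the "en" configuration are assumptions made for illustration; they are not mentioned in this article.

    from datasets import load_dataset

    # Stream the English split of C4 from the Hugging Face Hub so that the
    # full corpus (hundreds of gigabytes of cleaned web text) is not downloaded.
    c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

    for i, document in enumerate(c4):
        # Each record carries the scraped text along with metadata such as its source URL.
        print(document["text"][:80].replace("\n", " "), "...")
        if i == 2:
            break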

The T5 models were pretrained on many tasks, all in the format of <input text> -> <output text>.

Some examples are:

  • restoring corrupted text: Thank you <X> me to your party <Y> week. -> <X> for inviting <Y> last <Z>, where <Z> marks the end of the output (see the sketch after this list).
  • translation: translate English to German: That is good. -> Das ist gut.
  • judging the grammatical acceptability of a sentence (CoLA sentence): The course is jumping well. -> not acceptable.
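The following is a minimal, self-contained sketch of the span-corruption format used in the first example above. The helper function and the hand-picked span positions are illustrative only; the actual pre-training objective samples the corrupted spans at random.

    SENTINELS = ["<X>", "<Y>", "<Z>"]  # sentinel tokens as written in the example above

    def span_corrupt(words, spans):
        """Build a (corrupted input, target) pair in the span-corruption format.

        `spans` lists (start, end) word indices to mask out; a hypothetical helper
        for illustration -- real pre-training chooses the spans randomly.
        """
        corrupted, target, prev_end = [], [], 0
        for i, (start, end) in enumerate(spans):
            corrupted += words[prev_end:start] + [SENTINELS[i]]  # replace the span with a sentinel
            target += [SENTINELS[i]] + words[start:end]          # the target restores it
            prev_end = end
        corrupted += words[prev_end:]
        target += [SENTINELS[len(spans)]]                        # final sentinel marks end of output
        return " ".join(corrupted), " ".join(target)

    words = "Thank you for inviting me to your party last week".split()
    print(span_corrupt(words, [(2, 4), (8, 9)]))
    # ('Thank you <X> me to your party <Y> week', '<X> for inviting <Y> last <Z>')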

Architecture

The T5 series encompasses several models of varying sizes and capabilities, usually identified by their parameter count. The original paper[1] reported the following five models:

  Name    Parameters     # layers   d_model   d_ff    d_kv   # heads
  Small   ~60 million    6          512       2048    64     8
  Base    ~220 million   12         768       3072    64     12
  Large   ~770 million   24         1024      4096    64     16
  3B      ~2.8 billion   24         1024      16384   128    32
  11B     ~11 billion    24         1024      65536   128    128

In the above table,

  • # layers: Number of layers in the encoder; also, number of layers in the decoder. They always have the same number of layers.
  • # heads: Number of attention heads in each attention block.
  • d_model: Dimension of the embedding vectors.
  • d_ff: Dimension of the feedforward network within each encoder and decoder layer.
  • d_kv: Dimension of the key and value vectors used in the self-attention mechanism (a rough parameter-count sketch built from these quantities follows the list).
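As a rough illustration of how these hyperparameters determine model size, the sketch below estimates a parameter count from them. It ignores layer norms and relative-position biases and assumes a shared input/output embedding over roughly 32,000 tokens, so the numbers are approximations only.

    def t5_param_estimate(n_layers, n_heads, d_model, d_ff, d_kv, vocab=32_128):
        """Rough parameter count for an encoder-decoder T5 built from the
        hyperparameters in the table above (approximation; see caveats)."""
        d_inner = n_heads * d_kv            # total width of all attention heads
        attention = 4 * d_model * d_inner   # Q, K, V and output projections
        feedforward = 2 * d_model * d_ff    # the two feed-forward matrices
        encoder_layer = attention + feedforward
        decoder_layer = 2 * attention + feedforward   # self-attention + cross-attention + FFN
        return vocab * d_model + n_layers * (encoder_layer + decoder_layer)

    # T5-Base hyperparameters from the table: 12 layers, 12 heads, 768/3072/64.
    print(f"{t5_param_estimate(12, 12, 768, 3072, 64):,}")   # roughly 220 million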

References

  1. Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2020). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". Journal of Machine Learning Research. 21 (140): 1–67. ISSN 1533-7928.
  2. Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need". Advances in Neural Information Processing Systems. 30. Curran Associates, Inc.
  3. Roberts, Adam; Chung, Hyung Won; Mishra, Gaurav; Levskaya, Anselm; Bradbury, James; Andor, Daniel; Narang, Sharan; Lester, Brian; Gaffney, Colin; Mohiuddin, Afroz; Hawthorne, Curtis; Lewkowycz, Aitor; Salcianu, Alex; Zee, Marc van; Austin, Jacob (2023). "Scaling Up Models and Data with t5x and seqio". Journal of Machine Learning Research. 24 (377): 1–8. ISSN 1533-7928.
  4. Sutawika, Lintang; Komatsuzaki, Aran; Raffel, Colin (2024-04-15). "Pile-T5". EleutherAI Blog. Retrieved 2024-05-05.


