The debate over the energy consumption of large AI models is raging. Recently, an AI ethics researcher at Google was dismissed after pinpointing the upward spiral of ever-growing training data sets. The fact is that the numbers make one's head swim. In 2018, the BERT model made headlines by achieving best-in-class NLP performance with a training data set of 3 billion words. Two years later, AI researchers were no longer working with billions of parameters but with hundreds of billions: in 2020, OpenAI presented GPT-3 -- acclaimed as the largest AI model ever built, with a data set of 500…