OpenAI Status – Unexpected responses from ChatGPT

As widely reported in the media, yesterday ChatGPT started spitting out sequences of random words.

OpenAI's status page is reproduced below.

For once we don't read that it "bumped its digital head", "ran a high fever", or "got sunstroke".

Instead, we read from OpenAI that:

LLMs generate responses by randomly sampling words based in part on probabilities. Their "language" consists of numbers that map to tokens.

Very much in line with the definition of "stochastic parrots".

Source: OpenAI

Unexpected responses from ChatGPT

Incident Report for OpenAI

On February 20, 2024, an optimization to the user experience introduced a bug with how the model processes language.

LLMs generate responses by randomly sampling words based in part on probabilities. Their “language” consists of numbers that map to tokens.

In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense. More technically, inference kernels produced incorrect results when used in certain GPU configurations.
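The idea in the report can be illustrated with a toy sketch: a model emits a probability distribution over token IDs, one ID is sampled, and the ID is mapped back to a word. The vocabulary, probabilities, and the "corruption" step below are purely hypothetical, chosen only to show how a slightly wrong number produces an unrelated word.

```python
import random

# Hypothetical toy vocabulary: token IDs mapped to words (illustrative only).
vocab = {0: "the", 1: "cat", 2: "sat", 3: "on", 4: "mat"}

def sample_token(probs, rng):
    """Pick a token ID by sampling from a probability distribution."""
    # random.choices draws one index proportionally to the given weights.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

rng = random.Random(42)

# Pretend model output for the next token: probabilities over the vocabulary.
probs = [0.1, 0.5, 0.2, 0.1, 0.1]
token_id = sample_token(probs, rng)
print("sampled:", vocab[token_id])

# The bug described above is akin to corrupting the chosen number:
# even an off-by-one in the token ID maps to an unrelated word.
corrupted_id = (token_id + 1) % len(vocab)
print("corrupted:", vocab[corrupted_id])
```

The sketch shows why the failure mode looked like gibberish rather than a crash: every corrupted ID still maps to a valid token, so the output remains well-formed words in a nonsensical order.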

Upon identifying the cause of this incident, we rolled out a fix and confirmed that the incident was resolved.

Posted Feb 21, 2024 – 17:03 PST

Source: OpenAI Status – Unexpected responses from ChatGPT

If you like this post, please consider sharing it.
