The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

Someone alert Microsoft's business development team!

If this really works, it's a revolution!

Source: Microsoft Research and University of Chinese Academy of Sciences

Recent research, such as BitNet [WMD+23], is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.
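
For the curious: here is a minimal NumPy sketch of the absmean ternary quantization the paper describes (the helper name absmean_ternary_quantize is mine, not from the paper, and the real models apply this during training together with activation quantization). Each weight lands in {-1, 0, +1}, i.e. log2(3) ≈ 1.58 bits of information per weight, which is where the "1.58-bit" name comes from.

```python
import numpy as np

def absmean_ternary_quantize(W: np.ndarray, eps: float = 1e-5):
    """Quantize a weight matrix to ternary values {-1, 0, +1}.

    Sketch of the absmean scheme described in the BitNet b1.58 paper:
    scale the matrix by its mean absolute value, then round and clip to [-1, 1].
    """
    gamma = np.mean(np.abs(W)) + eps           # per-tensor absmean scale
    W_q = np.clip(np.round(W / gamma), -1, 1)  # ternary weights in {-1, 0, +1}
    return W_q.astype(np.int8), gamma          # gamma is kept to rescale outputs

# Toy example
W = np.random.randn(4, 4).astype(np.float32)
W_q, gamma = absmean_ternary_quantize(W)
print(W_q)
print(f"bits per ternary weight: {np.log2(3):.2f}")
```

With weights restricted to {-1, 0, +1}, the matrix multiplications in the Transformer reduce to additions and subtractions, which is what drives the latency, memory, and energy savings claimed in the abstract.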

Continue reading here: 2402.17764.pdf

