3 min read | Saved February 14, 2026
Do you care about this?
This article explains the difference between the commonly used binary definition of a kilobyte as 1024 bytes and the decimal definition as 1000 bytes. It discusses the confusion this creates in computing, especially with storage manufacturers and operating systems using different conventions. The piece also introduces binary prefixes to clarify these terms.
If you do, here's more
A kilobyte is often taken to be 1024 bytes, a convention rooted in the binary system computers use; RAM and other memory components are therefore sized in powers of two. While 1024 is close to 1000 (only a 2.4% difference), the discrepancy grows significantly with larger data sizes. For instance, a megabyte in binary is 1048576 bytes, compared to 1000000 bytes in decimal, a relative difference of about 4.9%. The gap keeps widening: terabytes differ by roughly 10%, and quettabytes by more than 25%.
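The growing gap described above follows directly from comparing 2^(10n) with 10^(3n) for each prefix. A minimal sketch (the prefix list and formatting here are illustrative, not from the article):

```python
# Relative gap between the binary (2^(10n)) and decimal (10^(3n))
# interpretation of each size prefix, from kilo (n=1) to quetta (n=10).
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa",
            "zetta", "yotta", "ronna", "quetta"]

for n, name in enumerate(prefixes, start=1):
    binary = 2 ** (10 * n)    # e.g. 1 kibibyte = 1024 bytes
    decimal = 10 ** (3 * n)   # e.g. 1 kilobyte = 1000 bytes
    gap = (binary - decimal) / decimal * 100
    print(f"{name:>7}: {gap:5.1f}% larger in binary")
```

Running this reproduces the figures in the text: 2.4% for kilo, about 4.9% for mega, roughly 10% for tera, and over 26% for quetta.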
The confusion arises because different sectors of the tech industry apply these definitions inconsistently: RAM manufacturers generally stick to the binary definition (1 KB = 1024 bytes), while storage device makers typically use decimal measurements (1 KB = 1000 bytes). To resolve the ambiguity, the International Electrotechnical Commission introduced dedicated binary prefixes, such as KiB for kibibyte, so the two systems can be told apart. Understanding the distinction is essential, particularly for non-technical users who may misinterpret storage capacities depending on which convention is used.
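The practical effect of the two conventions shows up when the same byte count is formatted with IEC (KiB, MiB, ...) versus SI (kB, MB, ...) units. A minimal sketch, assuming a helper named `format_size` (not from the article):

```python
def format_size(nbytes: int, binary: bool = True) -> str:
    """Format a byte count using IEC (base 1024) or SI (base 1000) units."""
    base = 1024 if binary else 1000
    units = ["KiB", "MiB", "GiB", "TiB"] if binary else ["kB", "MB", "GB", "TB"]
    value = float(nbytes)
    unit = "B"
    for u in units:
        if value < base:
            break
        value /= base
        unit = u
    return f"{value:.2f} {unit}"

# A drive advertised as "500 GB" (decimal), shown by an OS that uses binary units:
print(format_size(500_000_000_000, binary=False))  # 500.00 GB
print(format_size(500_000_000_000, binary=True))   # 465.66 GiB
```

This is the familiar "missing capacity" effect: nothing is lost, the same number of bytes is simply divided by 1024 three times instead of 1000.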