How did the IEC decide to create kibibytes?


What was the decision-making process that led the IEC to create "kibibytes", "mebibytes" and so forth?

To me it seems kilobytes were well established as 1024 bytes, both by the programmers using them and by electronic engineers. Indeed, even now the 1024-byte kilobyte is commonly used when talking about memories of various kinds.

JEDEC had also standardized on the 1024-byte kilobyte, and it remains in widespread use by billions of JEDEC-standards-compliant devices.

So I'm interested in the arguments used and the decision-making process that led the IEC to decide that kilobytes would be redefined as 1000 bytes, and in the creation of the rather awkward "kibibytes".










    I'm sure there is much more to it, but from when I first saw it, I took the redefinition to be a marketing move - i.e., it takes fewer transistors, less magnetic media, etc. for 1,000 than for 1,024, and for 1,000,000 than for 1,048,576, so the manufacturers jumped at the chance to make things seem bigger at no extra cost. But I could be just a little skeptical...

    – manassehkatz
    9 hours ago











    @manassehkatz HDD manufacturers certainly liked the power-of-10 version, but it always seemed the odd one out to me, as other memory devices such as RAM, EEPROM and flash memory, and the filesystems that often interacted with them, all used powers of 2.

    – user
    9 hours ago











    I don't have any inside info as to the reasoning, but I think that the growing discrepancy between 1000^N and 1024^N as N increased created an apparent need to coin terms that distinguished the forms for larger N (it's 2.4% for N=1 and about 5% for N=2, but almost 10% for N=4), which in turn created a "why not" for the smaller forms. What makes this ironic is that while many things are counted in multiples of 1024 bytes (which had an uppercase "K" prefix that could sensibly have been pronounced "kay"), the larger prefixes are used almost exclusively for identifying specific powers of two.

    – supercat
    8 hours ago
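    The discrepancies in the comment above are easy to verify; a quick sketch in Python (my own illustration, not part of the thread):

    ```python
    # Relative discrepancy between the binary and decimal readings
    # of the same prefix: 1024**n versus 1000**n.
    for n in range(1, 7):
        ratio = 1024**n / 1000**n
        print(f"N={n}: {(ratio - 1) * 100:.2f}%")
    ```

    This prints 2.40% for N=1, 4.86% for N=2, and 9.95% for N=4, growing without bound as N increases.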












  • The IEC themselves provide an explanation on their web site, but reproducing it here would require written authorisation from them which I don’t have.

    – Stephen Kitt
    8 hours ago











    There are/were two "standards" -- the actual SI standard that says kilo=1000 (etc) and the de facto standard of computer people that says kilo=1024 or 1000 depending on context and everyone is expected to know the right one in any context. I count myself in the latter camp. However, the kibifans do not agree with me.

    – another-dave
    7 hours ago

















Tagged: history






asked 9 hours ago by user (6,713 reputation; 1 gold, 11 silver, 29 bronze badges)
3 Answers






    To me it seems like kilobytes were well established as 1024 bytes, both by programmers using them and by electronic engineers

They are not the only people, though. The term got confusing mostly because of disk manufacturers, who preferred base 10 because your disk capacity came out as a larger number. Perhaps the most egregious nonsense is the high-density floppy disk, described as having 1.44 megabytes, where a megabyte is defined as 1000 kilobytes and a kilobyte is defined as 1024 bytes; i.e. 1.44 x 1000 x 1024, which is plainly ridiculous.

Also, "kilo", "mega" and "tera" are standard SI prefixes meaning various powers of 1,000 (in base 10). It seemed, therefore, a good idea to have different names for the base-2 versions.

Disclaimer: I personally ignore the base-2 names and abbreviations because they are stupid.
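The mixed-unit floppy figure is easy to check; a small Python sketch (my own illustration, not part of the answer), starting from the drive's actual geometry of 2880 sectors of 512 bytes:

```python
# A high-density 3.5" floppy holds 2880 sectors of 512 bytes each.
capacity = 2880 * 512
print(capacity)                  # 1474560 bytes

# Marketing arrived at "1.44 MB" by mixing multipliers:
print(capacity / (1000 * 1024))  # 1.44  (decimal kilo times binary kilo)

# Consistent units give less convenient numbers:
print(capacity / 1_000_000)      # 1.47456 decimal megabytes (MB)
print(capacity / 1_048_576)      # 1.40625 binary mebibytes (MiB)
```

Only the mixed 1000 x 1024 "megabyte" yields the tidy 1.44 that ended up on the label.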






answered 8 hours ago by JeremyP (6,095 reputation; 1 gold, 22 silver, 35 bronze badges)
While JEDEC memory standards used 1024-byte kilobytes at the time, many magnetic storage devices used 1000-byte kilobytes, for several reasons.

To explain where the 1024-byte value comes from: it is a nice convenient 2^10.

However, this use of powers of two only applied to RAM and ROM. Magnetic media did not come in power-of-two dimensions, so marketing had to choose between two numbers: the 1024-based standard of memory, which was irrelevant to their devices, or the traditional 1000-based metric standard. They chose the metric standard, since neither produced a nice round number and the 1000-based figure was the larger one.

This of course caused untold issues, and neither side was really wrong; disk drives were not JEDEC memory devices. The IEC thus decided to settle the matter by defining two distinct units for data storage: kibibytes and kilobytes.






– Robert Wm Ruedisueli (new contributor)

    So I'm interested in the arguments used and the decision making process that lead the IEC to decide that kilobytes would be redefined as 1000 bytes,

There was no redefinition, as 'kilobyte' isn't a unit of its own. It's 'kilo' as a prefix with the fixed definition of 1000, and 'byte' as, well, a byte. So kilobyte always meant 1000 bytes.

These prefixes are part of the International System of Units (SI), though they had already been in use before, since being introduced with the metric system in 1795. Within SI there are no different units for the same quantity (like inch, foot, yard, furlong, chain and mile for length) but always exactly one (the metre for length, the ampere for current, and so on). To use them at different scales they are prefixed according to powers of 10. A very convenient system.

And it was sloppy engineers who used this convenience to describe certain binary values with somewhat-close decimal prefixes, as 2^10 is 1024 and thus close to 1000. These were never official units in any way, just a kludge to get along. Standards documents always used power-of-10 prefixes, which incidentally is why serial transmission rates have always been decimal: a 9.6 kbit line transfers 9600 bits per second, not 9830 :)

When computers came out of the closet in the 80s, people started to recognize that these kilobytes aren't really kilo bytes but something different, so it became common to capitalize the K, as the SI prefix uses a lowercase k. A nice idea, as long as memories stayed in the range of a few dozen to a few hundred KB, but when 1024 KB was reached it broke, as the SI prefix M is already uppercase.

In the late 1980s/early 1990s it became obvious that a clear distinction was needed, so an international standard was proposed, and accepted in the late 1990s.

So it's now more than 20 years later ... heck, not even the English complained that long about the loss of their non-decimal currency.

    and the creation of the rather awkward "kibibytes".

The binary prefixes are anything but awkward. They offer an easy, convenient and well-defined way to operate with (almost) the same prefixes as for any other unit, but now binary, which makes a lot of sense for computing, doesn't it?

Just make the prefix uppercase and add a lowercase 'i', and it's binary. No more confusion.
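As a sketch of the convention this answer describes (uppercase letter plus lowercase 'i' for the binary form), here is a small Python helper of my own; the function name and its behavior are illustrative, not taken from any standard library:

```python
def format_size(n_bytes, binary=True):
    """Format a byte count with IEC binary prefixes (KiB, MiB, ...)
    or SI decimal prefixes (kB, MB, ...)."""
    step = 1024 if binary else 1000
    prefixes = ["", "Ki", "Mi", "Gi", "Ti"] if binary else ["", "k", "M", "G", "T"]
    value = float(n_bytes)
    for prefix in prefixes:
        # Stop once the value fits under one step, or we run out of prefixes.
        if value < step or prefix == prefixes[-1]:
            return f"{value:g} {prefix}B"
        value /= step

print(format_size(65536))                # 64 KiB
print(format_size(65536, binary=False))  # 65.536 kB
```

The same byte count renders unambiguously in either system; only the prefix spelling tells the reader which base is in use.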






        Your Answer








        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "648"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: false,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        imageUploader:
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        ,
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );













        draft saved

        draft discarded


















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f11563%2fhow-did-the-iec-decide-to-create-kibibytes%23new-answer', 'question_page');

        );

        Post as a guest















        Required, but never shown

























        3 Answers
        3






        active

        oldest

        votes








        3 Answers
        3






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        8















        To me it seems like kilobytes were well established as 1024 bytes, both by programmers using them and by electronic engineers




        They are not the only people though. The term got confusing mostly because of disk manufacturers who preferred base 10 because you disk capacity was a larger number. Perhaps the most egregious nonsense comes from the high density floppy disk which is described as having 1.44 Megabytes where a Megabyte is defined as 1000 kilobytes and a kilobyte is defined as 1024 bytes. i.e. 1.44 x 1000 x 1024 which is plainly ridiculous.



        Also "kilo", "tera" and "mega" are standard SI terms meaning various powers of 1,000 (in base 10). It should, therefore, be a good idea to have different names for the base 2 versions.



        Disclaimer: I personally ignore the base 2 names and abbreviations because they are stupid.






        share|improve this answer



























          8















          To me it seems like kilobytes were well established as 1024 bytes, both by programmers using them and by electronic engineers




          They are not the only people though. The term got confusing mostly because of disk manufacturers who preferred base 10 because you disk capacity was a larger number. Perhaps the most egregious nonsense comes from the high density floppy disk which is described as having 1.44 Megabytes where a Megabyte is defined as 1000 kilobytes and a kilobyte is defined as 1024 bytes. i.e. 1.44 x 1000 x 1024 which is plainly ridiculous.



          Also "kilo", "tera" and "mega" are standard SI terms meaning various powers of 1,000 (in base 10). It should, therefore, be a good idea to have different names for the base 2 versions.



          Disclaimer: I personally ignore the base 2 names and abbreviations because they are stupid.






          share|improve this answer

























            8












            8








            8








            To me it seems like kilobytes were well established as 1024 bytes, both by programmers using them and by electronic engineers




            They are not the only people though. The term got confusing mostly because of disk manufacturers who preferred base 10 because you disk capacity was a larger number. Perhaps the most egregious nonsense comes from the high density floppy disk which is described as having 1.44 Megabytes where a Megabyte is defined as 1000 kilobytes and a kilobyte is defined as 1024 bytes. i.e. 1.44 x 1000 x 1024 which is plainly ridiculous.



            Also "kilo", "tera" and "mega" are standard SI terms meaning various powers of 1,000 (in base 10). It should, therefore, be a good idea to have different names for the base 2 versions.



            Disclaimer: I personally ignore the base 2 names and abbreviations because they are stupid.






            share|improve this answer














            To me it seems like kilobytes were well established as 1024 bytes, both by programmers using them and by electronic engineers




            They are not the only people though. The term got confusing mostly because of disk manufacturers who preferred base 10 because you disk capacity was a larger number. Perhaps the most egregious nonsense comes from the high density floppy disk which is described as having 1.44 Megabytes where a Megabyte is defined as 1000 kilobytes and a kilobyte is defined as 1024 bytes. i.e. 1.44 x 1000 x 1024 which is plainly ridiculous.



            Also "kilo", "tera" and "mega" are standard SI terms meaning various powers of 1,000 (in base 10). It should, therefore, be a good idea to have different names for the base 2 versions.



            Disclaimer: I personally ignore the base 2 names and abbreviations because they are stupid.







            share|improve this answer












            share|improve this answer



            share|improve this answer










            answered 8 hours ago









            JeremyPJeremyP

            6,0951 gold badge22 silver badges35 bronze badges




            6,0951 gold badge22 silver badges35 bronze badges























                3














                While JDEC memory standards were using 1024 Byte Kilobytes at the time, many magnetic storage devices were using 1000 Byte Kilobyte size for several reasons.



                To explain where the 1024 Byte value comes from, it is a nice convenient 2^10 value.



                However, this use of power of twos only applied to RAM and ROM. Magnetic media did not use power of two dimensions, and thus when marketing saw that they had to chose between two numbers, the irrelevant 1024 based standard of memory or 1000 based traditional metric standard. They chose the traditional metric standard, since neither produced a nice round number and the 1000 based number created a large number.



                This of course caused untold issues, and neither side was really wrong. Disk drives were not JDEC memory devices. IEC, thus decided to settle the matter on disk drives by having two different units for magnetic data storage. Kibibytes and KiloBytes.






                share|improve this answer








                New contributor



                Robert Wm Ruedisueli is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                Check out our Code of Conduct.























                  3














                  While JDEC memory standards were using 1024 Byte Kilobytes at the time, many magnetic storage devices were using 1000 Byte Kilobyte size for several reasons.



                  To explain where the 1024 Byte value comes from, it is a nice convenient 2^10 value.



                  However, this use of power of twos only applied to RAM and ROM. Magnetic media did not use power of two dimensions, and thus when marketing saw that they had to chose between two numbers, the irrelevant 1024 based standard of memory or 1000 based traditional metric standard. They chose the traditional metric standard, since neither produced a nice round number and the 1000 based number created a large number.



                  This of course caused untold issues, and neither side was really wrong. Disk drives were not JDEC memory devices. IEC, thus decided to settle the matter on disk drives by having two different units for magnetic data storage. Kibibytes and KiloBytes.






                  share|improve this answer








                  New contributor



                  Robert Wm Ruedisueli is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.





















answered 6 hours ago by Robert Wm Ruedisueli (1 bronze badge)























"So I'm interested in the arguments used and the decision making process that led the IEC to decide that kilobytes would be redefined as 1000 bytes"

There was no redefinition, as 'kilobyte' isn't a unit of its own. It is the prefix 'kilo', with its fixed meaning of 1000, attached to 'byte'. So kilobyte has always meant 1000 bytes.

These prefixes are part of the International System of Units (SI), though they had already been in use before, having been introduced with the metric system in 1795. Within SI there are no competing units for the same quantity (like inch, foot, yard, furlong, chain and mile for length) but always exactly one (the metre for length, the ampere for current, and so on). To cover different scales, units are prefixed according to powers of 10. A very convenient system.

It was sloppy engineers who abused this convenience to describe certain binary quantities with numerically close decimal prefixes, since 2^10 is 1024 and thus close to 1000. These were never official units in any way, just a kludge to get along. Standards documents always used power-of-10 prefixes, which, incidentally, is why serial transmission rates have always been decimal: a 9.6 kbit line transfers 9600 bits per second, not 9830 :)

When computers came out of the closet in the 80s, people started to recognize that these "kilobytes" aren't really kilobytes, so it became common to capitalize the K, since the SI prefix uses a lowercase k. A nice idea, as long as memory sizes stayed in the range of a few dozen to a few hundred KB; but when 1024 KB was reached, the scheme broke down, as the SI prefix M is already uppercase.

In the late 1980s/early 1990s it became obvious that a clear, unambiguous notation was needed, so an international standard was proposed, and accepted in the late 1990s.

So it's now more than 20 years later ... heck, not even the English complained that long about the loss of their non-decimal currency.

"and the creation of the rather awkward 'kibibytes'."

The binary prefixes are anything but awkward. They offer an easy, convenient and well-defined way to work with (almost) the same prefixes as for any other unit, but now in binary, which makes a lot of sense for computing, doesn't it?

Just make the prefix uppercase and add a lowercase 'i', and it's binary. No more confusion.
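That "uppercase plus a lowercase 'i'" rule is easy to mechanize. A small sketch of a formatter using the IEC prefixes (function and constant names are mine, for illustration only):

```python
# Format a byte count using IEC binary prefixes: Ki, Mi, Gi, ...
# Each step up is a factor of 2**10 = 1024, not 1000.
IEC_PREFIXES = ["", "Ki", "Mi", "Gi", "Ti", "Pi"]

def format_iec(n_bytes):
    """Render n_bytes with the largest binary prefix that keeps the value below 1024."""
    value = float(n_bytes)
    for prefix in IEC_PREFIXES:
        if value < 1024:
            return f"{value:g} {prefix}B"
        value /= 1024
    return f"{value:g} EiB"  # fell through all listed prefixes

print(format_iec(1536))     # 1.5 KiB
print(format_iec(1048576))  # 1 MiB
```

Note that 1536 bytes comes out as an exact 1.5 KiB, whereas the decimal convention would call it 1.536 kB; the binary prefixes line up cleanly with power-of-two memory sizes.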






answered 3 hours ago by Raffzahn (62.1k reputation; 6 gold, 150 silver, 254 bronze badges)



























                                Thanks for contributing an answer to Retrocomputing Stack Exchange!

