What is the hex versus octal timeline?
When and why did hexadecimal representation become more common than octal for displaying and printing out multi-bit binary fields?
Tags: display
asked 8 hours ago by hotpaw2
I'd say around the time when word sizes of 36, 18 and 12 bit transitioned to the multiple-of-8-bit sizes we have today. That should also explain the "why". :-)
– dirkt
8 hours ago
Yup. The 12-bit PDP-8 from the 60s had instructions that fit very nicely into four 3-bit octal fields, but not so well into 8-bit bytes.
– scruss
8 hours ago
In the world of IBM, the transition was between the 7090/7094 and the 360. Hex was much more suitable for the 360 than octal.
– Walter Mitty
6 hours ago
In DEC, even the PDP-11 culture was still using octal, even though it didn't fit very well. The VAX people used Hex.
– Walter Mitty
6 hours ago
I was a software intern at CDC back around 1990. I remember complaining about an octal dump to one of the senior engineers, who responded, "what else would you use, hex?" with scorn and disbelief. Having grown up with an Apple II, it was in fact exactly what I wanted. :-)
– fadden
4 hours ago
3 Answers
Addressing the "why" part of the question - from my point of view as an assembly-code programmer on PDP-11 and VAX, the "standard" radix is most usefully chosen to match the instruction layout.
PDP-11 had 8 registers and 8 operand-mode indicators. Its double-operand instruction layout was
1 bit generally byte/word indicator (b)
3 bits opcode (o)
3 bits source mode (s)
3 bits source register (r)
3 bits destination mode (d)
3 bits destination register (R)
making octal the perfect way to express it:
booosssrrrdddRRR
The VAX, on the other hand, had 16 registers and 16 operand-mode indicators (though some combinations were used for short literals). A basic operand specifier in the variable-length instruction format was
4 bits mode (m)
4 bits register (r)
thus hex was perfect to express these.
mmmmrrrr
Of course, the larger address space used on VAX gives other advantages to hex: fewer characters in an address. This might have some bearing on "when".
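The field alignment described above can be checked with a short sketch (Python purely for illustration; the helper names are mine, though the encodings shown, MOV as PDP-11 octal opcode 01 and 0x5n as the VAX register addressing mode, are the standard ones):

```python
# PDP-11 double-operand instruction: fields are 3 bits wide, so each
# octal digit of the 16-bit word is (almost) exactly one field.
def pdp11_fields(word):
    """Split a 16-bit PDP-11 double-operand word into its fields."""
    return {
        "byte/word bit": (word >> 15) & 0o1,
        "opcode":        (word >> 12) & 0o7,
        "src mode":      (word >> 9)  & 0o7,
        "src register":  (word >> 6)  & 0o7,
        "dst mode":      (word >> 3)  & 0o7,
        "dst register":   word        & 0o7,
    }

# MOV R1, R2 assembles to 010102 octal: opcode 1, source mode 0,
# source register 1, destination mode 0, destination register 2 --
# readable digit by digit in the octal form.
word = 0o010102
print(oct(word), pdp11_fields(word))

# VAX operand specifier byte: 4-bit mode + 4-bit register, so each
# hex digit is exactly one field.
def vax_operand(spec):
    return {"mode": (spec >> 4) & 0xF, "register": spec & 0xF}

# 0x51 = register mode (5) applied to R1 -- readable digit by digit
# in the hex form.
print(hex(0x51), vax_operand(0x51))
```

The point of the exercise: in the matching radix, no masking or shifting is needed at all when reading a dump by eye, because digit boundaries and field boundaries coincide.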
answered 4 hours ago (edited 3 hours ago) by another-dave
Minicomputers and mainframes typically used octal, as many early mainframes had word sizes that were a multiple of 3 bits, and so did some minis. Operators and engineers within those environments became used to this, so even power-of-two word size minicomputers kept using octal.
Microcomputers, however, almost always had power-of-two word sizes for both address and data buses (or at least, a multiple of four bits), and there was a whole new generation of users who were not mentally locked into the mainframe/mini way of thinking. It was thus natural to start using hexadecimal instead.
You'll probably find, therefore, that hexadecimal rose to prominence about when microcomputers did, in the mid to late 1970s.
answered 4 hours ago by Chromatix
When and why
That's quite closely tied to the IBM System/360 and its introduction in 1964. The /360 is built around an 8-bit byte, a 32-bit word (16-bit half word) and a 24-bit address. All basic memory items are thus multiples of 8-bit units, which divide without remainder into hex digits and are best displayed that way.
Before that, the sizes of bytes, half words and words were (more often than not) multiples of 3 bits, which works out neatly in octal. After all, for anyone who grew up with decimal, it's far less mental work to drop two digits than to learn six new ones. It seems more natural, doesn't it?
After the /360, nearly all new designs switched to 8-bit bytes to allow easy data exchange with IBM mainframes. This happened even faster for minicomputers, as they were usually supplementary systems to (/360ish) mainframes.
See also this question about the rationale of 36-bit designs. While not a true duplicate, it's closely related.
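The "without any remainder" point can be made concrete with a small sketch (Python, added here for illustration; the sample word value is arbitrary): an 8-bit byte is always exactly two hex digits, so the bytes of a /360-style 32-bit word can be read straight off the hex form, whereas octal digits straddle byte boundaries.

```python
# A /360-style 32-bit word, shown in both radices.
word = 0x12AB34CD
hex_form = f"{word:08X}"      # two hex digits per byte: 12 AB 34 CD
oct_form = f"{word:011o}"     # 11 octal digits; byte boundaries invisible
print(hex_form, oct_form)

# Each byte is a fixed pair of hex digits ...
bytes_from_hex = [hex_form[i:i + 2] for i in range(0, 8, 2)]
print(bytes_from_hex)         # ['12', 'AB', '34', 'CD']

# ... but no fixed group of octal digits corresponds to a byte:
# 8 bits divide evenly into 4-bit hex digits, not 3-bit octal digits,
# so a byte's bits are split across octal digit boundaries.
print(8 % 4, 8 % 3)           # 0 2
```

With 36-, 18-, or 12-bit words the arithmetic goes the other way: those sizes divide evenly by 3, which is exactly why octal felt natural on the earlier machines the answer mentions.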
answered 3 hours ago by Raffzahn
Source: Retrocomputing Stack Exchange