Why does Intel's Haswell chip allow multiplication to be twice as fast as addition?
I was reading this very interesting question on Stack Overflow:
https://stackoverflow.com/questions/21819682/is-integer-multiplication-really-same-speed-as-addition-on-modern-cpu

One of the comments said:

"It's worth noting that on Haswell, the FP multiply throughput is double that of FP add. That's because both ports 0 and 1 can be used for multiply, but only port 1 can be used for addition. That said, you can cheat with fused multiply-adds since both ports can do them."

Why would they allow twice as much simultaneous multiplication as addition?

I'm new to EE Stack Exchange, so please excuse me if this is more appropriate for a different SE. It is more of a "hardware engineering" question than a general electrical engineering question, but certainly not a "software" question for SO or SU.

Tags: parallel, hardware, port, intel, calculator

asked 8 hours ago by user1271772
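To make the "cheat" mentioned in the quoted comment concrete, here is a minimal sketch of the idea (my own illustration, not from the question or from Intel documentation): an addition a + b can be issued as the fused multiply-add a*1.0 + b, and independent accumulators give the scheduler a chance to keep both FMA-capable ports busy. The function name sum_via_fma and the use of the C99 fma() library call are assumptions for the example; whether a compiler actually lowers this to FMA instructions depends on the target and flags such as -O2 -mfma.

```c
/* Sketch only: "adds as FMAs" using two independent accumulators.
 * fma(x, 1.0, acc) computes x*1.0 + acc, i.e. an addition expressed as
 * a fused multiply-add, which on Haswell can issue on port 0 or port 1. */
#include <math.h>
#include <stddef.h>

double sum_via_fma(const double *x, size_t n)
{
    double acc0 = 0.0, acc1 = 0.0;   /* two chains -> two ports, in principle */
    size_t i = 0;
    for (; i + 1 < n; i += 2) {
        acc0 = fma(x[i],     1.0, acc0);
        acc1 = fma(x[i + 1], 1.0, acc1);
    }
    if (i < n)                        /* leftover odd element, plain add */
        acc0 += x[i];
    return acc0 + acc1;
}
```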
Comments:

– DKNguyen (7 hours ago): I suspect floating point multiplication might just take less die area.

– user1271772 (7 hours ago): Thank you @DKNguyen! But multiplication involves far more circuitry than addition (in fact, addition is the final step of multiplication, so whatever circuitry is needed for multiplication also includes whatever is needed for addition), so I don't see how it can take up less die area!

– 比尔盖子 (6 hours ago): Yes, it's the correct place to ask. You should add a "computer-architecture" tag to your question.

– Janka (6 hours ago): FP multiplication is addition. See logarithms.
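As an aside on Janka's remark, as I read it (my own illustration, not part of the thread): in a binary floating-point format, multiplying two numbers multiplies the mantissas but simply adds the exponents, so part of an FP multiply really is an addition. The sketch below uses the standard C frexp() to show this; note the product's exponent can differ by one from the sum because the mantissa product gets renormalized.

```c
/* Illustration only: the exponent of a*b is the sum of the exponents of
 * a and b, up to a +/-1 normalization adjustment of the mantissa product. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 6.0, b = 20.0;
    int ea, eb, ep;
    double ma = frexp(a, &ea);        /* a = ma * 2^ea, with 0.5 <= |ma| < 1 */
    double mb = frexp(b, &eb);
    double mp = frexp(a * b, &ep);
    printf("exponents: %d + %d = %d, product exponent: %d\n",
           ea, eb, ea + eb, ep);
    printf("mantissas: %g * %g = %g (renormalized to %g)\n",
           ma, mb, ma * mb, mp);
    return 0;
}
```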
1 Answer
This possibly answers the title of the question, if not the body:

Floating-point addition requires aligning the two mantissas before adding them (depending on the difference between the two exponents), potentially requiring a large, variable amount of shift before the adder. Renormalizing the result of the mantissa addition may then be needed, potentially requiring another large, variable amount of shift in order to properly format the floating-point result. The two mantissa barrel shifters can therefore require more gate delays, greater wire delays, or extra cycles, exceeding the delay of a well-compacted carry-save-adder-tree multiplier front end.

answered 7 hours ago, edited 4 hours ago, by hotpaw2
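To make the align/renormalize steps in this answer concrete, here is a toy sketch of my own (not how a real Haswell FPU is built, and not IEEE-754 compliant): a value is held as mantissa * 2^exponent, the smaller operand's mantissa is shifted right by the exponent difference before the fixed-point add, and the result is shifted again if it overflows the mantissa width. Those two variable-distance shifts are the barrel-shifter work the answer refers to.

```c
/* Toy model only: align, add, renormalize. 24-bit mantissas, no rounding,
 * no signs, no special values -- just the shape of the data path. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t mant; int exp; } toyfp;   /* value = mant * 2^exp */

static toyfp toy_add(toyfp a, toyfp b)
{
    if (a.exp < b.exp) { toyfp t = a; a = b; b = t; }  /* make a the larger exponent */
    int d = a.exp - b.exp;
    b.mant = (d < 24) ? (b.mant >> d) : 0;  /* 1st variable shift: align mantissas */
    toyfp r = { a.mant + b.mant, a.exp };   /* the actual fixed-point add */
    while (r.mant >= (1u << 24)) {          /* 2nd variable shift: renormalize */
        r.mant >>= 1;
        r.exp  += 1;
    }
    return r;
}

int main(void)
{
    toyfp x = { 0xC00000, 4 };   /* 1.5  * 2^23 * 2^4 */
    toyfp y = { 0xA00000, 1 };   /* 1.25 * 2^23 * 2^1 */
    toyfp s = toy_add(x, y);
    printf("sum: mant=0x%06X exp=%d\n", s.mant, s.exp);
    return 0;
}
```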
Comments:

– user1271772 (3 hours ago): This is all very abstruse to me and seems quite esoteric. I have a PhD in Applied Mathematics and 10 years of post-PhD experience, and yet I had to look up "mantissa". What you're saying sounds like addition is more expensive than multiplication, but everywhere else I look, multiplication takes more clock cycles of latency than addition. There's more to do in multiplication than in addition; in fact, multiplication involves an addition at the end, so all those "gate delays", "wire delays" and "extra cycles" that you say addition requires should also be required for the last step of multiplying!

– The Photon (2 hours ago): @user1271772, integer multiplication certainly takes more resources (either time or gates) than integer addition. For floating point, everything is much more complicated. If you haven't heard the term mantissa, you haven't gone very far in studying floating-point computing.

– The Photon (2 hours ago): @user1271772, for a basic overview of modern (since 1990ish) floating-point representation, google "What Every Computer Scientist Should Know About Floating-Point Arithmetic".

– hotpaw2 (2 hours ago): Read up on the works of William Kahan, who was a professor in both the Mathematics and EECS departments when I was at UC Berkeley.

– user1271772 (1 hour ago): @The Photon: I came across that article in my first year of grad school more than 10 years ago, and I remember the title being very catchy, but I didn't read it. You are right that I haven't gone very far in studying floating-point computing; I just use it a lot.