
Why does Intel's Haswell chip allow multiplication to be twice as fast as addition?




I was reading this very interesting question on SO:



https://stackoverflow.com/questions/21819682/is-integer-multiplication-really-same-speed-as-addition-on-modern-cpu



One of the comments said:




"It's worth noting that on Haswell, the FP multiply throughput is double that of FP add. That's because both ports 0 and 1 can be used for multiply, but only port 1 can be used for addition. That said, you can cheat with fused multiply-adds, since both ports can do them."




Why would they allow twice as much simultaneous multiplication as addition?



I'm new to EE stack exchange, so please excuse me if this is more appropriate for a different SE. It is more of a "hardware engineering" question than a general electrical engineering question, but certainly not a "software" question for SO or SU.
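The "cheat with fused multiply-adds" in the quoted comment works because an FMA computes a*b + c with a single rounding at the end, instead of rounding the product first. A minimal sketch of the difference, using Python's exact `Fraction` arithmetic to stand in for the hardware's single-rounding datapath (the constants are just an illustrative corner case, not anything from the Haswell discussion):

```python
from fractions import Fraction

a = 1.0 + 2.0 ** -27          # exactly representable as a double
c = -1.0

# Separate multiply then add: the product a*a is rounded to 53 bits
# *before* the addition, so the 2**-54 term of the true square is lost.
two_step = a * a + c          # == 2**-26

# A fused multiply-add keeps the full-precision product, adds c, and
# rounds only once.  Modeled here with exact rational arithmetic
# followed by a single conversion back to a double.
fused = float(Fraction(a) * Fraction(a) + Fraction(c))   # == 2**-26 + 2**-54
```

Here `fused` retains a low-order bit that `two_step` has already rounded away, which is why compilers treating both ports as FMA-capable can recover addition throughput "for free".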
































  • I suspect floating point multiplication might just take less die area. – DKNguyen, 7 hours ago

  • Thank you @DKNguyen! But multiplication involves way more electronics than addition (in fact, addition is the final step of multiplication, so whatever circuitry is needed for multiplication also includes whatever is needed for addition), so I don't see how it can take up less die area! – user1271772, 7 hours ago

  • Yes, it's the correct place to ask. You should add a "computer-architecture" tag to your question. – 比尔盖子, 6 hours ago

  • FP multiplication is addition. See logarithms. – Janka, 6 hours ago
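Janka's logarithm remark refers to the structure of a floating-point multiply: the exponent fields simply add, while the fixed-width mantissas go through an ordinary multiplier. A toy decomposition using Python's `math.frexp`/`math.ldexp` (a sketch, not an IEEE-754-complete model; signs, specials, and overflow are ignored):

```python
import math

def fp_mul_steps(a: float, b: float) -> float:
    """Toy model of a floating-point multiply: split each operand into
    mantissa * 2**exp, multiply the mantissas, and simply ADD the
    exponents.  Unlike FP addition, no data-dependent alignment shift
    is needed before the arithmetic."""
    ma, ea = math.frexp(a)    # a == ma * 2**ea with 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)
    # The mantissa product lies in [0.25, 1); ldexp renormalizes it --
    # at most a one-position adjustment, versus the wide variable
    # shifts an adder's datapath needs.
    return math.ldexp(ma * mb, ea + eb)
```

For example, `fp_mul_steps(1.5, 2.0)` decomposes into mantissas 0.75 and 0.5 with exponents 1 and 2, and reassembles to 3.0.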

















Tags: parallel, hardware, port, intel, calculator






asked 8 hours ago by user1271772















1 Answer






This possibly answers the title of the question, if not the body:

Floating-point addition requires aligning the two mantissas before adding them (by a shift that depends on the difference between the two exponents), potentially requiring a large, variable amount of shift before the adder. The result of the mantissa addition may then need renormalizing, potentially requiring another large, variable shift to properly format the floating-point result. The two mantissa barrel shifters thus potentially cost more gate delays, longer wire delays, or extra cycles than a well-compacted carry-save adder-tree multiplier front end.
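The align/add/renormalize sequence described above can be sketched in a toy model (assuming Python's `math.frexp`/`math.ldexp`; it widens integers instead of right-shifting with guard/round/sticky bits, and ignores special values, so it illustrates the datapath rather than implementing IEEE 754):

```python
import math

def fp_add_steps(a: float, b: float) -> float:
    """Toy model of a floating-point adder's datapath:
    1) decompose into mantissa and exponent, 2) align the mantissas by
    shifting over the exponent difference (the first barrel shifter),
    3) add the fixed-point mantissas, 4) renormalize (the second
    variable shift, folded here into the final float conversion)."""
    PREC = 53
    ma, ea = math.frexp(a)            # a == ma * 2**ea, 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)
    ia, ib = int(ma * 2.0 ** PREC), int(mb * 2.0 ** PREC)  # integer mantissas
    ea, eb = ea - PREC, eb - PREC
    # Alignment: a wide, data-dependent shift before any arithmetic
    # can happen (we widen instead of dropping low bits).
    if ea >= eb:
        ia <<= ea - eb
        ea = eb
    else:
        ib <<= eb - ea
    # The actual fixed-point addition of the aligned mantissas.
    s = ia + ib
    # Renormalization and rounding: packing the sum back into a float
    # re-does the leading-bit normalization that hardware performs.
    return math.ldexp(float(s), ea)
```

The two data-dependent shifts bracket a single integer add, which is the structural point of the answer: the shifters, not the adder itself, dominate the FP-add critical path.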






  • This is all very abstruse to me and seems quite esoteric. I have a PhD in Applied Mathematics and 10 years of post-PhD experience, and yet had to look up "mantissa". What you're saying sounds like addition is more expensive than multiplication, but everywhere else I look, multiplication takes more clock cycles of latency than addition. There's more to do in multiplication than addition; in fact, multiplication involves an addition at the end, so all those "gate delays", "wire delays" and "extra cycles" that you say addition requires should also be required for the last step of multiplying! – user1271772, 3 hours ago

  • @user1271772, integer multiplication certainly takes more resources (either time or gates) than integer addition. For floating point, everything is much more complicated. If you haven't heard the term mantissa, you haven't gone very far in studying floating-point computing. – The Photon, 2 hours ago

  • @user1271772, for a basic overview of modern (since 1990ish) floating-point representation, google "What Every Computer Scientist Should Know About Floating-Point Arithmetic". – The Photon, 2 hours ago

  • Read up on the works of William Kahan, who was a professor in both the Mathematics and EECS departments when I was at UC Berkeley. – hotpaw2, 2 hours ago

  • @The Photon: I came across that article in my first year of grad school more than 10 years ago, and I remember the title being very catchy, but I didn't read it. You are right that I haven't gone very far in studying floating-point computing; I just use it a lot. – user1271772, 1 hour ago













answered 7 hours ago by hotpaw2, edited 4 hours ago













