Why did they ever make smaller than full-frame sensors?


You will occasionally encounter articles about how awesome full-frame cameras are. A lot of that is probably over-enthusiasm about a new piece of equipment, or simple marketing, but it seems to me that at least these things are true:

  • A sensor with a large area captures more light

  • A sensor with large individual pixels has less noise

  • A larger sensor can fit many more pixels

Full-frame cameras are much more expensive. This is strange to me, since I had the impression that making electronics smaller is always harder, because you need more precise equipment.

That must have been even more important at the dawn of digital single-lens reflex cameras, many years ago.

So why was it chosen to make sensors smaller than the film originally used in the cameras? AFAIK some lenses made for film cameras still work with some DSLRs, so why make the sensor a different size from the film?

Note that I'm more interested in the history of the initial decision (since the film frame size was the status quo, and DSLRs were expensive anyway) than in the price difference.

dslr sensor-size full-frame history

asked 16 hours ago by Tomáš Zato (edited 16 hours ago)




































  • Possible duplicate of Where does the price premium of full-frame come from? – Michael C, 16 hours ago

  • (7) "So why was it chosen to make sensors smaller than the film originally used in the cameras?" I have to quibble with your use of originally. There is nothing magical or special about the 135 film frame size. Medium and large format photography used much larger frame sizes than 36mm x 24mm, and existed before 135. So the question could be: why was the 135 frame size used in the first place? Why was any particular frame size used? – scottbb, 13 hours ago

  • (5) Why did they ever make smaller than large format sensors!? – szulat, 13 hours ago

  • (1) @scottbb There may well be many incorrect assumptions in my question. My knowledge of photography is limited, which is why I ask questions in the first place. – Tomáš Zato, 13 hours ago

  • (3) Understood, I didn't mean to discourage the asking of questions (that is the entire reason for a Q&A site). I just wanted to provide the perspective that what we think of as the reference size, and the way we always compare everything to full frame, isn't necessarily because it's the optimal, natural, or pre-destined baseline size. – scottbb, 13 hours ago
































5 Answers


















12 – answered 16 hours ago by tfb (edited 13 hours ago)
















Making large semiconductor devices with no defects, or only a very small number of them, is very hard. Smaller ones are much less demanding to make.

In particular, the yield – the proportion of the devices you make which are usable – drops as you try to make them larger. Understanding this properly needs statistics which I don't want to attempt on the fly, but it's easy to see why it happens from a simple example.

Let's say that you can make 24x36mm devices (i.e., full frame), and any device you make has a 90% chance of having a defect serious enough to make it scrap. Your yield of 24x36mm devices is 10%: you need to make 10 devices for every 1 you can sell. (Manufacturers are shy about what their yields are, but this is not stupidly low.)

So instead you use the same technology, with the same defect rate, to make 10x15mm sensors, of which you can fit four on each wafer you were using for the 24x36 ones, with room left for cutting the substrate up afterwards. 90% of the wafers are defective, but the defects are localised (this is what defects are like in reality), so you still get 3 good sensors from each defective wafer. Your yield is now (1 × 4 + 9 × 3)/40 (10 wafers, of which 9 are defective), which is 31/40, or about 77%. In fact it is a little lower, since I've assumed each wafer has at most one defect, while in reality defects occur with some probability per unit area, so some wafers will yield fewer than 3 good sensors.

But the basic point is clear: by making a smaller sensor you have increased your yield by a factor of more than seven, so your cost per unit of good sensor area is more than seven times lower (and since each good wafer now yields four sensors instead of one, the cost per sensor drops further still).

(You can make an even more extreme case: assume that every 24x36mm wafer is defective, but none has more than one defect (again, a simplification). The yield of 24x36mm sensors is 0: you can't make them at all. But the yield of 10x15mm sensors is 75% (somewhat less, because of the multiple-defect case): you get 3 good ones from each wafer.)
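To make the "defect per unit area" refinement concrete, here is a minimal Python sketch of the standard Poisson yield model, in which a die of area A is defect-free with probability exp(-D·A); the defect density below is a made-up value chosen so that a full-frame die yields about 10%, matching the example:

    import math

    def poisson_yield(die_area_mm2, defects_per_mm2):
        """Probability that a die of the given area has zero defects,
        assuming defects are scattered independently (Poisson model)."""
        return math.exp(-defects_per_mm2 * die_area_mm2)

    # Hypothetical defect density, chosen so a 24x36 mm die yields ~10%.
    D = -math.log(0.10) / (24 * 36)      # ~0.0027 defects per mm^2

    print(f"24x36 mm yield: {poisson_yield(24 * 36, D):.0%}")   # 10%
    print(f"10x15 mm yield: {poisson_yield(10 * 15, D):.0%}")   # ~67%

Under this model the small die yields about 67% rather than 77%, because the at-most-one-defect assumption is dropped – exactly the "a little lower" correction mentioned above.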

































  • What you say is definitely consistent with sensor prices and availability, but why is it so? I still cannot imagine how it can be easier to make a thing super tiny and harder to make it at a more macroscopic scale. – Tomáš Zato, 16 hours ago

  • (4) Because sensor pixels aren't "small" in terms of our current manufacturing technologies – the cutting edge for manufacturing (i.e. CPUs) is on the order of 10 nm. Sensor pixels are on the order of 1 µm, or 100 times bigger – at that point, making things 1.6x smaller is insignificant in terms of cost, and you get approximately 2.5x as many chips out of a wafer. – Philip Kendall, 16 hours ago

  • @PhilipKendall So the clean wafers are quite expensive then? – Tomáš Zato, 16 hours ago

  • (2) So is processing them. The problem is that ten defects, whether spread over a wafer with 2000 small chips or over a wafer with 11 big chips, in both cases mean you can throw 10 chips in the garbage. Make it 100 defects, and you get a lot of good chips in the first case, and a lot of all-garbage wafers in the second. – rackandboneman, 14 hours ago

  • (1) Also, whyever it is that way, the kind of packaging usually used (and needed, probably for precise alignment and the possibility of a glass window) for image sensors – ceramic and gold stuff, like computer CPUs of earlier decades – is expensive enough that it is usually avoided for everything except hard-core aerospace and military parts these days. It probably does not get cheaper for larger packages. – rackandboneman, 14 hours ago


















6 – answered 14 hours ago by rackandboneman
















The first mainstream applications for electronic image sensors (be they Image Orthicons, Vidicons, Plumbicons, CCDs, or CMOS active-pixel sensors, in analog-electronic or digital workflows) were in video, not in still images.

Video followed form factors similar to movie film. In movie film, 35mm (equivalent to full-frame still) or even 70mm were exotically large formats only used for actual (cinematic) movie production, due to significant costs.

Also, the resolution demands of most video applications used to be much lower – if pre-HD home televisions (at most 625 lines of maybe 1000 pixels each) were the main target, high-resolution capability was not necessary.

Also, in the non-cinema moving-image world the demands on lenses appear to be different – much higher expectations of lens speed and zoom range, much lower expectations of image quality. This can be met far more cost-effectively with lens designs that only have to serve a small image circle.

Digital still cameras existed several years before interchangeable-lens digital cameras became plausible, and they initially used tiny sensors that were very likely designed for video, or based on designs for video.

APS-C sized sensors were HUGE compared to a normal digital camera sensor when early DSLRs were introduced; the few early full-frame DSLRs (think Kodak DCS) and their sensors were extremely expensive, probably because there was very little design experience in making economical sensors at that size.

Image sensors are very coarse in actual structure compared to what even 1990s CPUs or memory chips used – for example, a common CPU for a late-1990s desktop computer used a 250nm feature size, which is considerably smaller than what would even be physically useful on a visible-light imaging sensor. Today, 14nm (!) is about the state of the art.

The necessity of avoiding large die sizes per part, regardless of structure size, has not changed much, as already explained in other answers.
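As a quick back-of-the-envelope check on how coarse that is, here is a small Python sketch (the 24-megapixel count is an arbitrary example, not a specific camera):

    import math

    # Pixel pitch of a hypothetical 24-megapixel full-frame (36x24 mm) sensor.
    sensor_w_mm, sensor_h_mm, pixels = 36.0, 24.0, 24e6

    # Pitch in nanometres: side length of one square pixel.
    pitch_nm = math.sqrt((sensor_w_mm * sensor_h_mm) / pixels) * 1e6

    print(f"pixel pitch: {pitch_nm:.0f} nm")              # 6000 nm
    print(f"vs 250 nm features: {pitch_nm / 250:.0f}x")   # 24x
    print(f"vs 14 nm features:  {pitch_nm / 14:.0f}x")    # ~429x

So even a fairly dense full-frame sensor has a pixel pitch of around 6 µm, more than twenty times the 250nm feature size of a late-1990s CPU.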































  • Beautiful answer, and it explains the specific reasoning for DSLR cameras more precisely, as opposed to wafer lithography in general (as the other answers do). Have all the upvotes. – Doktor J, 7 hours ago


















2
















Big sensors cost more than small sensors for more-or-less the same reason that big TVs cost more than small TVs. Compare a 30-inch TV and a 60-inch TV (about 75cm and 150cm, if you prefer). Miniaturization is no problem — we could make all of the parts of the 30-inch TV way smaller without running into any difficulty. The 30-inch TV costs less to make than the 60-inch TV because it uses less material and requires less work to finish. And the 60-inch TV will have a higher defect rate — four times the area means a much higher chance that something goes wrong somewhere on the screen, creating a dead pixel. Because customers hate dead pixels, a panel that has more than one or two (or maybe even more than zero) gets scrapped, or sold as part of a lower-cost product. The production costs of the defective units get rolled into the price of the acceptable units that are sold, so the bigger you go, the more expensive things get.

The same considerations apply to image sensors. Even the smallest sensors in prosumer cameras have features that are huge compared to what semiconductor technology is capable of, so the cost of miniaturization isn't a major factor. Compact cameras and cell phones normally use far smaller sensors, and even budget phones normally have two cameras, with fancier ones having three or four! For reasonable sizes, smaller costs less, not more. The defect issue also comes into play. The bigger you make the sensor, the more likely you are to have a defect that requires you to scrap the whole thing, and the more money (in materials) you lose when you do scrap it. That drives cost up with size, dramatically so beyond a certain point.

The largest-format digital camera you can get as of this writing has a whopping 9"×11" sensor (more than 8 times the diagonal of a "full frame" sensor, or more than 64 times the area), and it has only 12 megapixels, so obviously miniaturization isn't an issue — those pixels are huge. It retails for over $100,000.
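To see how sharply cost rises with area, here is a rough Python sketch combining a die-per-wafer estimate with a Poisson yield model; the wafer cost, defect density, and packing factor are all made-up round numbers for illustration, not manufacturer figures:

    import math

    WAFER_DIAMETER_MM = 300
    WAFER_COST = 5000.0       # hypothetical cost of one processed wafer
    DEFECTS_PER_MM2 = 0.002   # hypothetical defect density

    def cost_per_good_die(w_mm, h_mm):
        """Rough cost per defect-free die of the given size."""
        wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
        gross = int(0.85 * wafer_area / (w_mm * h_mm))  # crude packing estimate
        good = gross * math.exp(-DEFECTS_PER_MM2 * w_mm * h_mm)
        return WAFER_COST / good

    for name, (w, h) in {"compact (6.2x4.6)": (6.2, 4.6),
                         "APS-C (24x16)": (24, 16),
                         "full frame (36x24)": (36, 24)}.items():
        print(f"{name:>20}: ${cost_per_good_die(w, h):7.2f}")

With these numbers the full-frame die has about 30 times the area of the compact-camera die but comes out roughly 160 times more expensive per good part: materials scale linearly with area, but yield losses compound on top.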








































0
















Smaller sensors have higher production yields, and the electronics needed to process their output cost less.

Double the sensor, and you roughly square the processing power needed.

The reality is that DX sensors often have higher resolution and greater dynamic range than the films they replaced.






































0 – answered by Frank Van Hooft
















Because you specifically asked about history...

I'd suggest: size, weight, and cost.

All of those considerations were equally true in the pre-digital (i.e. film) days. A popular film format was the 110 size. See:
https://en.wikipedia.org/wiki/110_film

110 film was cheaper, the cameras were cheaper, and many of the cameras were a lot smaller and lighter than the smallest 35mm film compacts. They could fit very easily in a small pocket. Of course those same constraints exist today with digital cameras, as others have pointed out. So it's not just small and big image sensors today; there were also small and big film formats back then.





























        Your Answer








        StackExchange.ready(function()
        var channelOptions =
        tags: "".split(" "),
        id: "61"
        ;
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function()
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled)
        StackExchange.using("snippets", function()
        createEditor();
        );

        else
        createEditor();

        );

        function createEditor()
        StackExchange.prepareEditor(
        heartbeatType: 'answer',
        autoActivateHeartbeat: false,
        convertImagesToLinks: false,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: null,
        bindNavPrevention: true,
        postfix: "",
        imageUploader:
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/4.0/"u003ecc by-sa 4.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        ,
        noCode: true, onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        );



        );














        draft saved

        draft discarded
















        StackExchange.ready(
        function ()
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphoto.stackexchange.com%2fquestions%2f110913%2fwhy-did-they-ever-make-smaller-than-full-frame-sensors%23new-answer', 'question_page');

        );

        Post as a guest















        Required, but never shown

























        5 Answers
        5






        active

        oldest

        votes








        5 Answers
        5






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes









        12
















        Making large semiconductor devices with no, or only a very small number, of defects is very hard. Smaller ones are much less demanding to make.



        In particular the yield – the proportion of the ones you make which are usable – for semiconductors drops as you try and make them larger. To understand this properly needs statistics which I don't want to try and do on the fly, but it's easy to see why this happens from a simple example.



        Let's say that you can make 24x36mm devices (ie, full frame), and you have a 90% chance of any device you make having a serious enough defect that it is scrap. Your yield of 24x36mm devices is 10%: you need to make 10 devices for every 1 you can sell. (manufacturers are shy about what their yields are, but this is not stupidly low).



        So instead you use the same technology, with the same defect rate, to make 10x15mm sensors, of which you can fit four on each wafer you were using for the 24x36 ones, with room for cutting the substrate up after making them. 90% of the wafers are defective, but the defects are localised (this is what defects are like in reality), so you get 3 good sensors from the defective wafers. So your yield rate is now (1 * 4 + 9 * 3)/40 (10 wafers of which 9 are defective), which is 31/40 or about 77%. In fact it is a little lower as I've assumed wafers only have 0 or 1 defect, while in fact they have some probability of a defect per unit area (or something) and thus there will be wafers which yield less than 3 good sensors.



        But the basic point is clear: by making a sensor which is smaller you have increased your yield by a factor of seven. This means your cost to make these smaller sensors is seven times lower.



        (You can make an even more extreme case: let's assume that every 24x36mm wafer is defective, but none have more than one defect (again this is simplification). The yield for 24x36mm sensors is 0: you can't make them. But the yield for 10x15mm sensors is (somewhat less than, because of the multiple-defect case) 75%: you get 3 good ones from each wafer.)






        share|improve this answer



























        • What you say is definitely consistent with the sensor prices and availability, but why is it so? I still cannot imagine how can it be easier to make a thing super tiny and harder to make it more towards macroscopic scale.

          – Tomáš Zato
          16 hours ago






        • 4





          Because sensor pixels aren't "small" in terms of our current manufacturing technologies - cutting edge for manufacturing (i.e. CPUs) is on the order of 10 nm. Sensor pixels are of the order of 1 µm or 100 times bigger - at that point, making things 1.6x smaller is insignificant in terms of cost, and you get approximately 2.5x as many chips out of a wafer.

          – Philip Kendall
          16 hours ago











        • @PhilipKendall So the clean wafers are quite expensive then?

          – Tomáš Zato
          16 hours ago






        • 2





          So is processing them - the problem is, ten defects spread over a wafer with 2000 small chips or over a wafer with 11 big chips, in both cases mean you can throw 10 chips in the garbage. Let it be 100 defects - and you get a lot of chips in the first case, and a lot of all-garbage wafers in the second.

          – rackandboneman
          14 hours ago







        • 1





          Also, whyever it is that way, the kind of packaging usually used (and needed, probably for precise alignment and the possibility of a glass window) for image sensors (ceramic and gold stuff, like on computer CPUs of earlier decades) is expensive enough that it is usually avoided for everything except hard core aerospace and military parts these days. It probably does not get cheaper for larger packages.

          – rackandboneman
          14 hours ago















        12
















        Making large semiconductor devices with no, or only a very small number, of defects is very hard. Smaller ones are much less demanding to make.



        In particular the yield – the proportion of the ones you make which are usable – for semiconductors drops as you try and make them larger. To understand this properly needs statistics which I don't want to try and do on the fly, but it's easy to see why this happens from a simple example.



        Let's say that you can make 24x36mm devices (ie, full frame), and you have a 90% chance of any device you make having a serious enough defect that it is scrap. Your yield of 24x36mm devices is 10%: you need to make 10 devices for every 1 you can sell. (manufacturers are shy about what their yields are, but this is not stupidly low).



        So instead you use the same technology, with the same defect rate, to make 10x15mm sensors, of which you can fit four on each wafer you were using for the 24x36 ones, with room for cutting the substrate up after making them. 90% of the wafers are defective, but the defects are localised (this is what defects are like in reality), so you get 3 good sensors from the defective wafers. So your yield rate is now (1 * 4 + 9 * 3)/40 (10 wafers of which 9 are defective), which is 31/40 or about 77%. In fact it is a little lower as I've assumed wafers only have 0 or 1 defect, while in fact they have some probability of a defect per unit area (or something) and thus there will be wafers which yield less than 3 good sensors.



        But the basic point is clear: by making a sensor which is smaller you have increased your yield by a factor of seven. This means your cost to make these smaller sensors is seven times lower.



        (You can make an even more extreme case: let's assume that every 24x36mm wafer is defective, but none have more than one defect (again this is simplification). The yield for 24x36mm sensors is 0: you can't make them. But the yield for 10x15mm sensors is (somewhat less than, because of the multiple-defect case) 75%: you get 3 good ones from each wafer.)






        share|improve this answer



























        • What you say is definitely consistent with the sensor prices and availability, but why is it so? I still cannot imagine how can it be easier to make a thing super tiny and harder to make it more towards macroscopic scale.

          – Tomáš Zato
          16 hours ago






        • 4





          Because sensor pixels aren't "small" in terms of our current manufacturing technologies - cutting edge for manufacturing (i.e. CPUs) is on the order of 10 nm. Sensor pixels are of the order of 1 µm or 100 times bigger - at that point, making things 1.6x smaller is insignificant in terms of cost, and you get approximately 2.5x as many chips out of a wafer.

          – Philip Kendall
          16 hours ago











        • @PhilipKendall So the clean wafers are quite expensive then?

          – Tomáš Zato
          16 hours ago






        • 2





          So is processing them - the problem is, ten defects spread over a wafer with 2000 small chips or over a wafer with 11 big chips, in both cases mean you can throw 10 chips in the garbage. Let it be 100 defects - and you get a lot of chips in the first case, and a lot of all-garbage wafers in the second.

          – rackandboneman
          14 hours ago







        • 1





          Also, whyever it is that way, the kind of packaging usually used (and needed, probably for precise alignment and the possibility of a glass window) for image sensors (ceramic and gold stuff, like on computer CPUs of earlier decades) is expensive enough that it is usually avoided for everything except hard core aerospace and military parts these days. It probably does not get cheaper for larger packages.

          – rackandboneman
          14 hours ago













        12














        12










        12









        Making large semiconductor devices with no, or only a very small number, of defects is very hard. Smaller ones are much less demanding to make.



        In particular the yield – the proportion of the ones you make which are usable – for semiconductors drops as you try and make them larger. To understand this properly needs statistics which I don't want to try and do on the fly, but it's easy to see why this happens from a simple example.



        Let's say that you can make 24x36mm devices (ie, full frame), and you have a 90% chance of any device you make having a serious enough defect that it is scrap. Your yield of 24x36mm devices is 10%: you need to make 10 devices for every 1 you can sell. (manufacturers are shy about what their yields are, but this is not stupidly low).



        So instead you use the same technology, with the same defect rate, to make 10x15mm sensors, of which you can fit four on each wafer you were using for the 24x36 ones, with room for cutting the substrate up after making them. 90% of the wafers are defective, but the defects are localised (this is what defects are like in reality), so you get 3 good sensors from the defective wafers. So your yield rate is now (1 * 4 + 9 * 3)/40 (10 wafers of which 9 are defective), which is 31/40 or about 77%. In fact it is a little lower as I've assumed wafers only have 0 or 1 defect, while in fact they have some probability of a defect per unit area (or something) and thus there will be wafers which yield less than 3 good sensors.



        But the basic point is clear: by making a sensor which is smaller you have increased your yield by a factor of seven. This means your cost to make these smaller sensors is seven times lower.



        (You can make an even more extreme case: let's assume that every 24x36mm wafer is defective, but none have more than one defect (again this is simplification). The yield for 24x36mm sensors is 0: you can't make them. But the yield for 10x15mm sensors is (somewhat less than, because of the multiple-defect case) 75%: you get 3 good ones from each wafer.)






        share|improve this answer















        Making large semiconductor devices with no, or only a very small number, of defects is very hard. Smaller ones are much less demanding to make.



        In particular the yield – the proportion of the ones you make which are usable – for semiconductors drops as you try and make them larger. To understand this properly needs statistics which I don't want to try and do on the fly, but it's easy to see why this happens from a simple example.



        Let's say that you can make 24x36mm devices (ie, full frame), and you have a 90% chance of any device you make having a serious enough defect that it is scrap. Your yield of 24x36mm devices is 10%: you need to make 10 devices for every 1 you can sell. (manufacturers are shy about what their yields are, but this is not stupidly low).



        So instead you use the same technology, with the same defect rate, to make 10x15mm sensors, of which you can fit four on each wafer you were using for the 24x36 ones, with room for cutting the substrate up after making them. 90% of the wafers are defective, but the defects are localised (this is what defects are like in reality), so you get 3 good sensors from the defective wafers. So your yield rate is now (1 * 4 + 9 * 3)/40 (10 wafers of which 9 are defective), which is 31/40 or about 77%. In fact it is a little lower as I've assumed wafers only have 0 or 1 defect, while in fact they have some probability of a defect per unit area (or something) and thus there will be wafers which yield less than 3 good sensors.



        But the basic point is clear: by making a sensor which is smaller you have increased your yield by a factor of seven. This means your cost to make these smaller sensors is seven times lower.



        (You can make an even more extreme case: let's assume that every 24x36mm wafer is defective, but none have more than one defect (again this is simplification). The yield for 24x36mm sensors is 0: you can't make them. But the yield for 10x15mm sensors is (somewhat less than, because of the multiple-defect case) 75%: you get 3 good ones from each wafer.)







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited 13 hours ago

























        answered 16 hours ago









        tfbtfb

        3,5727 silver badges19 bronze badges




        3,5727 silver badges19 bronze badges















        • What you say is definitely consistent with the sensor prices and availability, but why is it so? I still cannot imagine how can it be easier to make a thing super tiny and harder to make it more towards macroscopic scale.

          – Tomáš Zato
          16 hours ago






        • 4





          Because sensor pixels aren't "small" in terms of our current manufacturing technologies - cutting edge for manufacturing (i.e. CPUs) is on the order of 10 nm. Sensor pixels are of the order of 1 µm or 100 times bigger - at that point, making things 1.6x smaller is insignificant in terms of cost, and you get approximately 2.5x as many chips out of a wafer.

          – Philip Kendall
          16 hours ago











        • @PhilipKendall So the clean wafers are quite expensive then?

          – Tomáš Zato
          16 hours ago






        • 2





          So is processing them - the problem is, ten defects spread over a wafer with 2000 small chips or over a wafer with 11 big chips, in both cases mean you can throw 10 chips in the garbage. Let it be 100 defects - and you get a lot of chips in the first case, and a lot of all-garbage wafers in the second.

          – rackandboneman
          14 hours ago







        • 1





          Also, whyever it is that way, the kind of packaging usually used (and needed, probably for precise alignment and the possibility of a glass window) for image sensors (ceramic and gold stuff, like on computer CPUs of earlier decades) is expensive enough that it is usually avoided for everything except hard core aerospace and military parts these days. It probably does not get cheaper for larger packages.

          – rackandboneman
          14 hours ago

















        • What you say is definitely consistent with the sensor prices and availability, but why is it so? I still cannot imagine how can it be easier to make a thing super tiny and harder to make it more towards macroscopic scale.

          – Tomáš Zato
          16 hours ago






        • 4





          Because sensor pixels aren't "small" in terms of our current manufacturing technologies - cutting edge for manufacturing (i.e. CPUs) is on the order of 10 nm. Sensor pixels are of the order of 1 µm or 100 times bigger - at that point, making things 1.6x smaller is insignificant in terms of cost, and you get approximately 2.5x as many chips out of a wafer.

          – Philip Kendall
          16 hours ago











        • @PhilipKendall So the clean wafers are quite expensive then?

          – Tomáš Zato
          16 hours ago






        • 2





          So is processing them - the problem is, ten defects spread over a wafer with 2000 small chips or over a wafer with 11 big chips, in both cases mean you can throw 10 chips in the garbage. Let it be 100 defects - and you get a lot of chips in the first case, and a lot of all-garbage wafers in the second.

          – rackandboneman
          14 hours ago







        • 1





          Also, whyever it is that way, the kind of packaging usually used (and needed, probably for precise alignment and the possibility of a glass window) for image sensors (ceramic and gold stuff, like on computer CPUs of earlier decades) is expensive enough that it is usually avoided for everything except hard core aerospace and military parts these days. It probably does not get cheaper for larger packages.

          – rackandboneman
          14 hours ago
















        What you say is definitely consistent with the sensor prices and availability, but why is it so? I still cannot imagine how can it be easier to make a thing super tiny and harder to make it more towards macroscopic scale.

        – Tomáš Zato
        16 hours ago





        What you say is definitely consistent with the sensor prices and availability, but why is it so? I still cannot imagine how can it be easier to make a thing super tiny and harder to make it more towards macroscopic scale.

        – Tomáš Zato
        16 hours ago




        4




        4





        Because sensor pixels aren't "small" in terms of our current manufacturing technologies - cutting edge for manufacturing (i.e. CPUs) is on the order of 10 nm. Sensor pixels are of the order of 1 µm or 100 times bigger - at that point, making things 1.6x smaller is insignificant in terms of cost, and you get approximately 2.5x as many chips out of a wafer.

        – Philip Kendall
        16 hours ago





        Because sensor pixels aren't "small" in terms of our current manufacturing technologies - cutting edge for manufacturing (i.e. CPUs) is on the order of 10 nm. Sensor pixels are of the order of 1 µm or 100 times bigger - at that point, making things 1.6x smaller is insignificant in terms of cost, and you get approximately 2.5x as many chips out of a wafer.

        – Philip Kendall
        16 hours ago













        @PhilipKendall So the clean wafers are quite expensive then?

        – Tomáš Zato
        16 hours ago





        @PhilipKendall So the clean wafers are quite expensive then?

        – Tomáš Zato
        16 hours ago




        2




        2





        So is processing them - the problem is, ten defects spread over a wafer with 2000 small chips or over a wafer with 11 big chips, in both cases mean you can throw 10 chips in the garbage. Let it be 100 defects - and you get a lot of chips in the first case, and a lot of all-garbage wafers in the second.

        – rackandboneman
        14 hours ago






        So is processing them - the problem is, ten defects spread over a wafer with 2000 small chips or over a wafer with 11 big chips, in both cases mean you can throw 10 chips in the garbage. Let it be 100 defects - and you get a lot of chips in the first case, and a lot of all-garbage wafers in the second.

        – rackandboneman
        14 hours ago





        1




        1





        Also, whyever it is that way, the kind of packaging usually used (and needed, probably for precise alignment and the possibility of a glass window) for image sensors (ceramic and gold stuff, like on computer CPUs of earlier decades) is expensive enough that it is usually avoided for everything except hard core aerospace and military parts these days. It probably does not get cheaper for larger packages.

        – rackandboneman
        14 hours ago





        Also, whyever it is that way, the kind of packaging usually used (and needed, probably for precise alignment and the possibility of a glass window) for image sensors (ceramic and gold stuff, like on computer CPUs of earlier decades) is expensive enough that it is usually avoided for everything except hard core aerospace and military parts these days. It probably does not get cheaper for larger packages.

        – rackandboneman
        14 hours ago













        6
















        The first mainstream applications for electronic image sensors (be it Image-Orthicons, Vidicons, Plumbicons, or CCDs, or CMOS active pixel sensors, be it analog-electronic or digital workflows) were in video, not in still images.



        Video followed form factors similar to movie film. In movie film, 35mm (equivalent to full frame still) or even 70mm were exotically large formats only used for actual (cinematic) movie production due to significant costs.



        Also, the resolution demands for most video applications used to be much smaller - if pre-HD home televisions (maximum resolution 625 lines of maybe a 1000 pixels each) were the major target, high resolution capabilities were not necessary.



        Also, in the non-cinema moving image world the demands on lenses appear to be different - much more expectations on lens speed and zoom range, much less on image quality. This can be done far more cost effective with lens designs that only have to service a small image circle.



        Digital still cameras existed several years before interchangeable lens cameras became plausible, and these used tiny sensors first that were very likely designed for or based on designs for video.



        APS-C sized sensors were HUGE compared to a normal digital camera sensor when early DSLRs were introduced; the few early full frame DSLRs (think Kodak DCS) and their sensors were extremely expensive, probably because there was very little design experience in making economical sensors in that size.



        Image sensors are very coarse in actual structure compared to what CPUs or memory chips even in the 1990s used - for example, a common CPU for late 1990s desktop computers used 250nm feature size, which is quite smaller than what would even be physically useful on a visible-light imaging sensor. Today, 14nm (!!) is about state of the art.



        The necessity to avoid large die sizes per part, regardless of the structure sizes, as already explained in other posts, has not changed much.






        share|improve this answer

























        • Beautiful answer, and more precisely explains the specific reasoning behind it for DSLR cameras as opposed to wafer lithography in general (as the other answers do). Have all the upvotes.

          – Doktor J
          7 hours ago















        6
















        The first mainstream applications for electronic image sensors (be it Image-Orthicons, Vidicons, Plumbicons, or CCDs, or CMOS active pixel sensors, be it analog-electronic or digital workflows) were in video, not in still images.



        Video followed form factors similar to movie film. In movie film, 35mm (equivalent to full frame still) or even 70mm were exotically large formats only used for actual (cinematic) movie production due to significant costs.



        Also, the resolution demands for most video applications used to be much smaller - if pre-HD home televisions (maximum resolution 625 lines of maybe a 1000 pixels each) were the major target, high resolution capabilities were not necessary.



        Also, in the non-cinema moving image world the demands on lenses appear to be different - much more expectations on lens speed and zoom range, much less on image quality. This can be done far more cost effective with lens designs that only have to service a small image circle.



        Digital still cameras existed several years before interchangeable lens cameras became plausible, and these used tiny sensors first that were very likely designed for or based on designs for video.



        APS-C sized sensors were HUGE compared to a normal digital camera sensor when early DSLRs were introduced; the few early full frame DSLRs (think Kodak DCS) and their sensors were extremely expensive, probably because there was very little design experience in making economical sensors in that size.



        Image sensors are very coarse in actual structure compared to what CPUs or memory chips even in the 1990s used - for example, a common CPU for late 1990s desktop computers used 250nm feature size, which is quite smaller than what would even be physically useful on a visible-light imaging sensor. Today, 14nm (!!) is about state of the art.



        The necessity to avoid large die sizes per part, regardless of the structure sizes, as already explained in other posts, has not changed much.






        share|improve this answer

























        • Beautiful answer, and more precisely explains the specific reasoning behind it for DSLR cameras as opposed to wafer lithography in general (as the other answers do). Have all the upvotes.

          – Doktor J
          7 hours ago













        6














        6










        6









        The first mainstream applications for electronic image sensors (be it Image-Orthicons, Vidicons, Plumbicons, or CCDs, or CMOS active pixel sensors, be it analog-electronic or digital workflows) were in video, not in still images.



        Video followed form factors similar to movie film. In movie film, 35mm (equivalent to full frame still) or even 70mm were exotically large formats only used for actual (cinematic) movie production due to significant costs.



        Also, the resolution demands for most video applications used to be much smaller - if pre-HD home televisions (maximum resolution 625 lines of maybe a 1000 pixels each) were the major target, high resolution capabilities were not necessary.



        Also, in the non-cinema moving image world the demands on lenses appear to be different - much more expectations on lens speed and zoom range, much less on image quality. This can be done far more cost effective with lens designs that only have to service a small image circle.



        Digital still cameras existed several years before interchangeable lens cameras became plausible, and these used tiny sensors first that were very likely designed for or based on designs for video.



        APS-C sized sensors were HUGE compared to a normal digital camera sensor when early DSLRs were introduced; the few early full frame DSLRs (think Kodak DCS) and their sensors were extremely expensive, probably because there was very little design experience in making economical sensors in that size.



        Image sensors are very coarse in actual structure compared to what CPUs or memory chips even in the 1990s used - for example, a common CPU for late 1990s desktop computers used 250nm feature size, which is quite smaller than what would even be physically useful on a visible-light imaging sensor. Today, 14nm (!!) is about state of the art.



        The necessity to avoid large die sizes per part, regardless of the structure sizes, as already explained in other posts, has not changed much.






        share|improve this answer













        The first mainstream applications for electronic image sensors (be it Image-Orthicons, Vidicons, Plumbicons, or CCDs, or CMOS active pixel sensors, be it analog-electronic or digital workflows) were in video, not in still images.



        Video followed form factors similar to movie film. In movie film, 35mm (equivalent to full frame still) or even 70mm were exotically large formats only used for actual (cinematic) movie production due to significant costs.



        Also, the resolution demands for most video applications used to be much smaller - if pre-HD home televisions (maximum resolution 625 lines of maybe a 1000 pixels each) were the major target, high resolution capabilities were not necessary.



        Also, in the non-cinema moving image world the demands on lenses appear to be different - much more expectations on lens speed and zoom range, much less on image quality. This can be done far more cost effective with lens designs that only have to service a small image circle.



        Digital still cameras existed several years before interchangeable lens cameras became plausible, and these used tiny sensors first that were very likely designed for or based on designs for video.



        APS-C sized sensors were HUGE compared to a normal digital camera sensor when early DSLRs were introduced; the few early full frame DSLRs (think Kodak DCS) and their sensors were extremely expensive, probably because there was very little design experience in making economical sensors in that size.



        Image sensors are very coarse in actual structure compared to what CPUs or memory chips even in the 1990s used - for example, a common CPU for late 1990s desktop computers used 250nm feature size, which is quite smaller than what would even be physically useful on a visible-light imaging sensor. Today, 14nm (!!) is about state of the art.



        The necessity to avoid large die sizes per part, regardless of the structure sizes, as already explained in other posts, has not changed much.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered 14 hours ago









        rackandbonemanrackandboneman

        3,8578 silver badges20 bronze badges




        3,8578 silver badges20 bronze badges















• Beautiful answer, and more precisely explains the specific reasoning behind it for DSLR cameras as opposed to wafer lithography in general (as the other answers do). Have all the upvotes.
  – Doktor J, 7 hours ago



























Big sensors cost more than small sensors for more-or-less the same reason that big TVs cost more than small TVs. Compare a 30-inch TV and a 60-inch TV (about 75 cm and 150 cm, if you prefer). Miniaturization is no problem: we could make all the parts of the 30-inch TV far smaller without running into any difficulty. The 30-inch TV costs less to make than the 60-inch TV because it uses less material and takes less work to finish. And the 60-inch TV will have a higher defect rate: four times the area means a much higher chance that something goes wrong somewhere on the screen, creating a dead pixel. Because customers hate dead pixels, a panel with more than one or two (or maybe even more than zero) gets scrapped, or sold as part of a lower-cost product. The production costs of the defective units get rolled into the price of the acceptable units that are sold, so the bigger you go, the more expensive things get.



The same considerations apply to image sensors. Even the smallest sensors in prosumer cameras have features that are huge compared to what semiconductor technology is capable of, so the cost of miniaturization isn't a major factor. Compact cameras and cell phones normally use far smaller sensors, and even budget phones now carry two cameras, with fancier ones having three or four! For reasonable sizes, smaller costs less, not more. The defect issue also comes into play: the bigger you make the sensor, the more likely it is to contain a defect that forces you to scrap the whole thing, and the more money (in materials) you lose when you do. That drives cost up with size, dramatically so beyond a certain point.
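One conventional way to quantify that defect effect is a Poisson yield model. The sketch below uses an assumed defect density and wafer cost, purely illustrative values rather than real foundry numbers:

    # Poisson yield model: Y = exp(-D * A), where D is the defect density
    # (defects per cm^2) and A is the die area in cm^2. Cost per *good* die
    # is the raw silicon cost divided by the yield.
    import math

    DEFECT_DENSITY = 0.1        # defects per cm^2 -- assumed, illustrative

    def cost_per_good_die(width_mm, height_mm, wafer_cost_per_cm2=5.0):
        area_cm2 = (width_mm / 10) * (height_mm / 10)
        yield_fraction = math.exp(-DEFECT_DENSITY * area_cm2)
        raw_cost = wafer_cost_per_cm2 * area_cm2
        return raw_cost / yield_fraction, yield_fraction

    for name, w, h in [("1/2.3-inch", 6.2, 4.6),
                       ("APS-C", 23.6, 15.7),
                       ("full frame", 36.0, 24.0)]:
        cost, y = cost_per_good_die(w, h)
        print(f"{name:>11}: yield {y:.1%}, relative cost per good die {cost:.2f}")

With these made-up inputs, the cost per good die grows much faster than the area itself, which is exactly the superlinear effect described above.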



The largest-format digital camera you can buy as of this writing has a whopping 9"×11" sensor (more than 8 times the diagonal of a "full frame" sensor, or more than 64 times the area), and it has only 12 megapixels, so miniaturization obviously isn't the issue: those pixels are huge. It retails for over $100,000.
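Those diagonal and area ratios are easy to verify, taking 36×24 mm for full frame and converting 9×11 inches to millimetres:

    # Verify the "more than 8x the diagonal, more than 64x the area" claim.
    import math

    ff_w, ff_h = 36.0, 24.0                      # full frame, mm
    big_w, big_h = 9 * 25.4, 11 * 25.4           # 9 x 11 inches, in mm

    ff_diag = math.hypot(ff_w, ff_h)
    big_diag = math.hypot(big_w, big_h)
    print(f"Diagonal ratio: {big_diag / ff_diag:.1f}x")          # ~8.3x
    print(f"Area ratio: {(big_w * big_h) / (ff_w * ff_h):.0f}x") # ~74x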














edited 7 hours ago; answered 7 hours ago by hobbs (323 rep; 1 silver badge, 7 bronze badges)








































Smaller sensors have higher production yields, and the electronics needed to process their output cost less.



Double the sensor's linear dimensions and you roughly quadruple its area, its pixel count, and the processing power needed, as the quick calculation below shows.
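A quick sanity check of that scaling, assuming a fixed (hypothetical) 4 µm pixel pitch so that pixel count grows with sensor area:

    # Pixel count scales with area: doubling both sensor dimensions at a
    # fixed pixel pitch quadruples the pixels that must be read and processed.
    PITCH_UM = 4.0                      # assumed pixel pitch, microns

    def megapixels(width_mm, height_mm, pitch_um=PITCH_UM):
        return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

    small = megapixels(18.0, 12.0)      # hypothetical small sensor
    doubled = megapixels(36.0, 24.0)    # both dimensions doubled (full frame)
    print(f"{small:.1f} MP -> {doubled:.1f} MP ({doubled / small:.0f}x)")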



The reality is that DX sensors often deliver higher resolution and greater dynamic range than the films they replaced.
















answered 13 hours ago by mongo (186 rep; 2 bronze badges)








































                        Because you specifically asked about history...



                        I'd suggest: size, weight, & cost.



All those considerations were equally true in the pre-digital (i.e. film) days. A popular film format was the 110 size; see
https://en.wikipedia.org/wiki/110_film



110 film was cheaper, the cameras were cheaper, and many of the cameras were a lot smaller and lighter than the smallest 35mm film compacts; they fit very easily in a small pocket. Of course the same constraints exist today with digital cameras, as others have pointed out. So it isn't just small and big image sensors today; there were small and big film formats back then as well.














answered 18 mins ago by Frank Van Hooft (1 rep; new contributor)
































