For most people, photography begins with a JPG, a computer file of compressed pixel data, or, for video, an MP4 file.
The Web is awash with experts who explain how to make cinema-quality films with cameras or phones bought at Best Buy. They don't explain that these image files are simplifications, degraded versions, of what the camera saw. Many of them suggest that one can get the same results from these compressed files that one could get from the original sensor data, which is known as RAW. In some cases, this is true. In others, false.
In this story, I hope to give you the tools to understand RAW sensor data first-hand. Then you can put these claims into perspective.
We're going to take a naive approach. When I first approached the subject of RAW, I argued with someone on the Magic Lantern forum where I insisted each pixel recorded a full color data point; he said it only recorded one color. Fortunately, he was patient and finally corrected my misunderstanding.
I've created some tools, which would have helped me, to help you work with RAW files. You'll also get a taste of the algorithms used to create usable images.
All these files can be found in my shared Google Drive folder, here.
Let's pretend we've learned that Blackmagic has created a new kind of high-end consumer video camera that shoots in RAW. That is, it takes 24 still images a second and saves them to a memory card. (The DSLR we take on vacation can only do 6 frames a second, and only for a few seconds, before needing to "clear the buffer.")
Our spy has obtained a single frame of a video in DNG format. Keep in mind, sensor data is pretty much the same, from your phone camera to a $100,000 cinema camera.
Through some Python programming, we can extract the data into a CSV format which we can load into Excel. (Again, all of this you can do with your own RAW files using my Python and VBA code!)
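As a sketch of that extraction step: the function name and the tiny 4x4 patch below are my own inventions for illustration. In practice the 2-D array of sensor values would come from decoding the DNG, for example with the rawpy library.

```python
import numpy as np

def raw_to_csv(raw, path):
    """Dump a 2-D array of raw sensor values to CSV, one row per sensor row."""
    np.savetxt(path, raw, fmt="%d", delimiter=",")

# In practice the array would come from a DNG, e.g. with the rawpy library:
#   raw = rawpy.imread("frame.dng").raw_image
# Here we demonstrate with a tiny, made-up 4x4 sensor patch.
patch = np.array([[372, 1024, 372, 1024],
                  [2048, 65472, 2048, 65472],
                  [372, 1024, 372, 1024],
                  [2048, 65472, 2048, 65472]], dtype=np.uint16)
raw_to_csv(patch, "patch.csv")
print(open("patch.csv").read().splitlines()[0])  # first CSV row
```

Each cell of the CSV then becomes one Excel cell, one sensor pixel.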
We load the data into Excel and create a header to show us the column (width) numbers.
Our evidence tells us this image has a width of 3,820 pixels and a height of 2,160 pixels. The smallest value is 372 and the largest is 65,472.
If we scale these numbers down to a range between 1 and 256, which Excel can display as shades of gray, we get this:
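A minimal sketch of that rescaling, using the frame's own minimum and maximum (372 and 65,472) as the endpoints; `to_gray` is a hypothetical helper name, not code from the workbook:

```python
def to_gray(value, lo=372, hi=65472):
    """Linearly rescale one raw sensor value onto the 1..256 display range."""
    return 1 + round((value - lo) * 255 / (hi - lo))

# The darkest and brightest values in the frame map to the ends of the scale.
print(to_gray(372), to_gray(65472))  # 1 256
```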
Now, I have a fairly powerful PC, but it struggled with the processing of this data. The workbook is 45 megabytes, for a single frame of 4K RAW sensor data. It took my PC seven seconds to create that 640 x 320 pixel drawing. It took 1:04 minutes to create a BMP file. Of course, nobody works with RAW in Excel, but these times demonstrate the non-trivial demands of RAW 4K images.
Now onto color, for which we must go down a rabbit hole.
Excel, which is our lab here (like almost every other digital display technology), deals with color by breaking it up into primaries. For computers, it's red, green and blue (RGB). Printing technologies break it up into cyan, magenta and yellow.
Why not just use one number for each color?
We don't because we don't have the technology to create or detect the 10+ million colors we perceive. Instead, we use the same tech behind the stained glass windows from 100 A.D. I kid you not.
When you look at an image on a screen, your eye is mixing three pinpoint lights of red, green and blue. This is done by placing electronically controlled stained-glass filters over a white light. Something like this:
The combinations of intensity, for each of the three primary colors, give us the perception of around 16 million colors. When you see the color yellow, no small yellow light is shining from your display!
To create the perception of yellow, your computer sends an electric current to the red and green pixel filters, which allows them to become partially transparent to the light behind them. That light goes through the green and red pixel filters (stained glass), allowing them to light up, which your eye combines into yellow (or any other color needed).
In short, we can only create or detect photons (brightness). We cannot easily create or detect photons with specific wavelengths, or color. Sure, we have instruments that can do it with a single light source, but not with the 8 million small and separate light sources needed to create camera sensors or displays.
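That mixing can be sketched as plain intensity triples; the dictionary below is illustrative only (0 means a filter is fully opaque, 255 fully open), not actual display code:

```python
# Each perceived color is just three filter intensities.
colors = {
    "yellow":  (255, 255, 0),    # red + green filters open, blue closed
    "magenta": (255, 0, 255),    # red + blue open, green closed
    "cyan":    (0, 255, 255),    # green + blue open, red closed
    "white":   (255, 255, 255),  # all three filters fully open
}
for name, (r, g, b) in colors.items():
    print(f"{name}: R={r} G={g} B={b}")
```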
Color filters are the mother of all kludges in photography and film/video. The exact formulations of each manufacturer's filters are closely guarded secrets.
OK, so it should come as no surprise to you that no camera you have ever used sees color. It only sees photons. I'm not philosophically splitting hairs. Understanding the arbitrary nature of color, as created by the manufactured filters, is central to understanding that the term "color science" is an oxymoron. There's only color opinion, but I digress.
Camera sensors can no more detect specific color wavelengths than our displays can create them. They use filters too. They're generally called "Bayer" filters after the man who invented them (though again, these principles have been well understood for centuries). Here's an illustration of a camera sensor.
Because each camera maker and display maker will use different techniques to detect or render color, they follow a common protocol of giving each pixel a full-color value, roughly between 1 and 16 million.
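One common way to store such a full-color value is to pack the three 8-bit channels into a single 24-bit integer. This sketch, with hypothetical helper names, shows where the "roughly 16 million" comes from:

```python
def pack_rgb(r, g, b):
    """Pack three 0..255 channel values into one 24-bit full-color value."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    """Recover the three channel values from a packed 24-bit integer."""
    return (value >> 16) & 255, (value >> 8) & 255, value & 255

print(pack_rgb(255, 255, 255))  # 16777215, the largest of the ~16 million values
print(unpack_rgb(pack_rgb(200, 100, 50)))  # (200, 100, 50)
```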
Here's where things get tricky. Theoretically, we need three sensor pixels to create a full color; that is, a pixel with a red filter over it will need green and blue.
Therefore, we use the brightness of each pixel to set the brightness, and the color of that pixel, plus two "borrowed" colors from neighboring pixels, to set the color. In real life, I'm sure all the photos you've taken look great to you, even perfect. But if they're at the full resolution of the camera, they only have 30% of the color correct; the rest is guessed from neighboring pixels.
You may have noticed that colors look a bit cleaner when you scale down a high-resolution image. Now you know why. If you're averaging all the values from 4-pixel blocks, you're getting a true color at that resolution. That is, a 6 megapixel image from a 24 megapixel camera is as close to true color-per-pixel as you can get.
However, we're more sensitive to sharpness, so in general, we want higher resolution even if some false colors are smeared over our image.
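A naive sketch of that borrowing, assuming an RGGB Bayer layout and made-up sensor values (real demosaicing algorithms interpolate far more carefully than this nearest-neighbor approach):

```python
import numpy as np

# A 4x4 patch of raw values under an assumed RGGB Bayer pattern (values invented).
raw = np.array([[100, 150, 102, 148],
                [140,  60, 139,  61],
                [101, 151, 103, 149],
                [141,  62, 138,  63]])

def demosaic_nearest(raw):
    """Naive demosaic: every output pixel borrows R, G, B from its own 2x2 cell."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=raw.dtype)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y, x]          # top-left of the cell sits under a red filter
            g = raw[y, x + 1]      # use one of the cell's two greens
            b = raw[y + 1, x + 1]  # bottom-right sits under a blue filter
            rgb[y:y+2, x:x+2] = (r, g, b)
    return rgb

print(demosaic_nearest(raw)[0, 0].tolist())  # [100, 150, 60]
```

Every pixel keeps its own measured channel's brightness but wears two guessed channels, which is exactly the 30%-measured, 70%-borrowed situation described above.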
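That averaging can be sketched as 2x2 binning of an RGGB mosaic; the values are made up and `bin_bayer_2x2` is my own name, but it shows why the quarter-resolution image has a fully measured color at every pixel:

```python
import numpy as np

def bin_bayer_2x2(raw):
    """Collapse each 2x2 RGGB cell into one true-color pixel: R, avg(G, G), B."""
    r = raw[0::2, 0::2]                          # red sample of each cell
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # average the two green samples
    b = raw[1::2, 1::2]                          # blue sample of each cell
    return np.dstack([r, g, b])  # quarter resolution, every channel measured

raw = np.array([[100, 150, 102, 148],
                [140,  60, 139,  61],
                [101, 151, 103, 149],
                [141,  62, 138,  63]])
print(bin_bayer_2x2(raw)[0, 0].tolist())  # [100.0, 145.0, 60.0]
```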
Back to the data. I took a RAW data file and converted it to the amount of light each sensor pixel detected, which it recorded as a value roughly between 1 and 65,000. Because our technology is manufactured to show 256 levels of brightness (matching our biology), I transcoded these numbers from a scale of 1 to 65,000 to 1 to 256.
The ultimate goal of every photographer and filmmaker is to pick exactly the 256 levels of brightness they want for each color component (RGB). Once they make that choice, they cannot recreate those 65,000 values of relative nuance!
Below is the code I used to create a full-color image from the RAW data.
Here is the data written to a bitmap file.