The carbon footprint for being lazy after our photo shoots: why it has to change

What this article is about

It is trendy – and more importantly, it is necessary – to reduce our carbon footprint. Let’s calculate how much one bad habit of photographers can pollute.

I take on average 10 to 20 thousand images per year. Many pro photographers shoot ten times more, typically above 100 thousand images per year, if not more. At the same time, for various reasons, I work hard at keeping only the best shots: from these 10-20 thousand photos per year, I store only 1 or 2 thousand. And there is still room for improvement. I don’t think this ratio is exceptional; other people report a typical ratio of 80-95% of useless images, whatever the reasons. However, I must confess this culling takes me a lot of time, and I do understand why people don’t do it – it should be somewhat automated. I was wondering what the impact of keeping all these useless images is. How much greenhouse gas does it generate per year? Basically, how much do our useless images pollute when we don’t delete them?

How many tons of carbon dioxide per thousand images stored?

Simple question, difficult answer. First and foremost, there are head and tail winds: whereas storing 1 gigabyte (GB) of data requires fewer and fewer tons of CO2 every year, images are getting bigger and bigger as new sensors let you shoot with more megapixels. Same situation for videos. It looks quite challenging to anticipate future trends, but let’s make the calculation as of today, in 2019. It is reasonable to believe head and tail winds will not completely change the result in the coming years.

Let’s try to calculate just a rough estimate…

In this article, I don’t make any calculation for videos, just for still images. I will consider 3 categories of photographers:

  • casual photographers, who typically take 5 thousand images per year,
  • enthusiasts (20 thousand images per year),
  • and pro photographers (100 thousand images per year).

Casual photographers only create JPG files in this exercise, with a 24-megapixel camera, so each JPG file typically weighs 5 megabytes (MB). This means 5 MB × 5’000 = 25 GB per year.

Enthusiasts shoot RAW with a 36-megapixel camera (about 36 MB per file) and convert 10% to JPG, at 7.5 MB each. This means 36 MB × 20’000 + 7.5 MB × 2’000 = 735 GB per year.

Pros will shoot both RAW and JPG, with different cameras and sensors. Let’s make a rough estimate at 15 MB per image. This means basically 1.5 TB per year.

To summarize, I will just consider 1 TB per year per photographer. This simplifies the calculation, does not change the overall result, and is consistent with the kind of photographer we are looking at for this effect (mostly enthusiasts and pros).

All these numbers are arguable but that’s a good starting point for a first estimation.

Now the key question is how much carbon dioxide emissions for 1 TB ?

Several studies estimate that storing 1 TB of data in the cloud generates around 100 kg of carbon dioxide emissions per year (ref. [1], [2] and [3]). Again, the calculation is quite complicated, and the range is very broad, from typically 50 kg to 2 tons. I am taking 100 kg as a conservative estimate.

At 1 TB per year, this means 1 ton of CO2 per year once you reach 10 TB, i.e. after 10 years of photography, as storage is cumulative.
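The arithmetic above can be sketched in a few lines of Python. The per-image sizes and the 100 kg of CO2 per TB per year are the rough assumptions of this article, not measured values:

```python
# Rough storage and CO2 estimate per photographer profile.
# All figures are this article's assumptions: average file sizes in MB,
# and 100 kg CO2 per TB stored per year (storage is cumulative).

PROFILES = {
    # (images per year, average size per image in MB)
    "casual":     (5_000, 5.0),     # JPG only, 24 MP
    "enthusiast": (20_000, 36.75),  # RAW + 10% JPG -> ~735 GB / 20k images
    "pro":        (100_000, 15.0),  # mixed RAW/JPG estimate
}

KG_CO2_PER_TB_PER_YEAR = 100  # conservative assumption (range: 50 kg - 2 t)

def storage_tb_per_year(images: int, mb_per_image: float) -> float:
    """Annual storage in terabytes (decimal units: 1 TB = 1e6 MB)."""
    return images * mb_per_image / 1_000_000

def co2_after_years(tb_per_year: float, years: int) -> float:
    """Kg of CO2 emitted in the final year, since stored data piles up."""
    return tb_per_year * years * KG_CO2_PER_TB_PER_YEAR

for name, (images, mb) in PROFILES.items():
    tb = storage_tb_per_year(images, mb)
    print(f"{name}: {tb:.3f} TB/year, "
          f"{co2_after_years(tb, 10):.0f} kg CO2/year after 10 years")
```

With the simplified 1 TB/year figure, `co2_after_years(1.0, 10)` gives the 1 ton per year mentioned above.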

What does it mean in a sustainable world?

In a sustainable world, the average individual emissions should be around 3 tons of carbon dioxide per year (ref. [4]). We are far from that level now (US: 18-20 tons per year per person, China: 6.5 tons, …) but that’s where we are going.

Needless to say, we can’t use almost one third of our yearly quota (on a sustainable planet) just for storing images; it should not be more than a couple of percent. Once again, it shows that a sustainable world will have dramatic consequences on our lives. It means we should delete all our useless images, as they represent 80-95% of these storage emissions.


It is time to reduce our data from images and videos. Beyond the fact that we store too much mostly useless information, it is necessary for living on a sustainable planet. Of course, one can object that these data “might” be useful in the future, who knows? At the same time, it is good practice to focus on what really matters and to be able to retrieve this important information later when needed. Less is sometimes better. We always find good excuses to refuse change, but this change is needed and, in the long run, inevitable. It is time to be consistent and delete, as a “pre-post-processing step”, most of the useless images, whatever useless may mean.


[1] – Carbon and the cloud, Stanford Magazine

[2] – Trends in Server Efficiency and Power Usage in Data Centers, SPEC 2019

[3] – The carbon footprint of a distributed cloud storage, Cubbit

[4] – Stopping Climate Change: A Practical Plan 3 Tons Carbon Dioxide Per Person Per Year, Ecocivilization

There are no rules for good photographs, but there are rules for poor photographs

A "good" image for some, but no rules can apply and some will not even like this image

As DPReview’s Nigel Danson reminds us, and to quote Ansel Adams: “There are no rules for good photographs. There are just good photographs”.

There are no rules for good photographs, fair enough, but I am convinced there are rules to define and detect the poor ones, whatever poor may mean for the photographer. In a digital world, we can take really a lot of pictures. I shoot 10’000-20’000 photos per year (a pro can shoot over 100’000 per year). I don’t use more than 1’000 of them. I like to believe it is important to delete most of them, just to make my life simpler when I start post-processing and when I look back at my images, for search or other reasons.

Less is more?

Taking a lot of pictures is not always a bad habit, but at the end of the day, we all must cope with this huge number of useless, poor pictures. Therefore, it seems important to define some tangible rules that one can apply manually or through software to eliminate the wrong ones as early as possible in the workflow. Ideally, this should be done at “run time”, during the shoot itself, which is certainly possible if images are uploaded in real time to the cloud and analyzed right away.

But to be more concrete, let’s say there is a need to detect and delete (non exhaustively):

  • poorly exposed images,
  • motion blur and focus blur (when not on purpose),
  • useless duplicates (whatever that may mean).

Many photographers may claim there is no way to detect poor images programmatically due to the non-deterministic nature of art. For instance, a histogram might not be enough to detect a poorly exposed image. At the same time, it will be difficult to convince me that when a photographer fails to take the photo as intended, it is worth keeping, as long as one believes there are quality standards to comply with when it comes to art. It is also about being disciplined and mastering what we are doing. So, it may not be acceptable to continue working on images for which we wrongly set too high an ISO or too slow a shutter speed, or where the main subject is not in focus as we wanted.
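As a toy illustration of the first rule, here is a minimal sketch of a histogram-based exposure check. The thresholds (5, 250, 25%) are arbitrary starting points of my own, not established standards, and a real tool would have to be smarter than this:

```python
# Minimal sketch: flag poor exposure from 8-bit luminance values (0-255).
# Thresholds are arbitrary examples - each photographer should tune them.

def exposure_verdict(luminances, low=5, high=250, max_clipped_ratio=0.25):
    """Return 'underexposed', 'overexposed' or 'ok' for a list of
    8-bit luminance values, based on the share of near-clipped pixels."""
    n = len(luminances)
    dark = sum(1 for v in luminances if v <= low) / n
    bright = sum(1 for v in luminances if v >= high) / n
    if dark > max_clipped_ratio:
        return "underexposed"
    if bright > max_clipped_ratio:
        return "overexposed"
    return "ok"

# Synthetic examples: a mostly-black frame vs a balanced one.
night_shot = [0] * 700 + [120] * 300
normal_shot = [80] * 400 + [128] * 400 + [200] * 200
print(exposure_verdict(night_shot))   # -> underexposed
print(exposure_verdict(normal_shot))  # -> ok
```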

It is simple, but not easy

As a conclusion, I tend to disagree with the claimed impossibility of detecting poor images with software. It is certainly possible to detect poor images automatically and get rid of them. The settings will not be the same for every photographer; everyone might have to set their own acceptable quality level in terms of exposure, acutance and duplication.

It may be very difficult to delete all the poor images, but fine-tuning the parameters and the algorithms so that we get rid of most of the uninteresting ones would be more than good practice. It would save time and let the photographer focus on what really matters: the good photographs, for which there are indeed no rules.

High ISO: how far should a photographer go?

Shooting at high ISO is a much-discussed topic. Some believe its importance is overstated, and indeed, it is far from being that important in photography. At the same time, we should know the limits: how dark can it be? How far can we go at high ISO when shutter speed is critical? Not from a purely technical perspective, but to stay consistent with our overall artistic approach. Some photographers may shoot only in very low light, but that’s unusual. Most of us shoot at high ISO and also at lower ISO values. So, high-ISO noise is just a constraint we need to deal with.

The problem is to know, for each camera we own, the ISO limit beyond which we should not shoot. Indeed, too much noise or too much underexposure leads to unacceptable image quality. The usual approach is purely empirical: when you believe the noise level has become unacceptable, you just don’t shoot at this value or above.

The problem with this approach is twofold: it can be biased, as there is no tangible comparison until you use a scientific measurement of noise, and it does not consider the fact that you may shoot with different kinds of cameras (from smartphones to drones, DSLRs, full-frame or small sensors). However, regardless of sensors, a photographer should keep the signal-to-noise ratio (a proxy of noise level) consistent between cameras. Nobody cares which camera you used for a photoshoot, but all images should be delivered with a similar, if not equal, quality level irrespective of the sensor.

I have started to measure SNR (signal-to-noise ratio, a proxy of noise level) for a given camera at different ISO levels. The process is simple:

  1. Take photos at different ISO values with the same histogram of the same object or landscape (no over- or underexposure between images) but with different cameras. Images must be as similar as possible.
  2. Define an SNR limit below which you believe noise is too high for your best sensor (Nikon D750 in my case).
  3. Define the ISO limit for each camera at the same value of SNR to ensure consistency in quality.
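Steps 2 and 3 can be sketched in a few lines. The SNR formula here (mean over standard deviation of a patch that should be uniform) is one simple proxy among others, and the per-ISO measurements below are invented for illustration:

```python
# Sketch of steps 2-3: compute SNR per ISO, then keep the highest ISO
# whose SNR stays above the limit chosen on your best sensor.
import statistics

def snr(pixels):
    """Signal-to-noise ratio of a patch that should be uniform:
    mean value divided by standard deviation of the noise."""
    return statistics.mean(pixels) / statistics.stdev(pixels)

def iso_limit(snr_by_iso, min_snr):
    """Highest ISO at which the measured SNR is still acceptable."""
    usable = [iso for iso, value in snr_by_iso.items() if value >= min_snr]
    return max(usable) if usable else None

# Hypothetical measurements for one camera (ISO -> SNR):
measurements = {800: 42.0, 1600: 38.5, 3200: 30.1, 6400: 21.7, 12800: 12.3}
print(iso_limit(measurements, min_snr=20))  # -> 6400 with this threshold
```

Running the same `iso_limit` with the same `min_snr` on each camera’s measurements is what keeps the quality consistent across bodies.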

Results on the graph below:

Based on this method, I concluded I can shoot up to 12’800 ISO maximum with my D750 and, if possible, not above 6’400. It is not that the image is unacceptable above that; it is just to be sure shooting at high ISO has no significant impact on image quality according to my own standard. The SNR is indeed stable up to 6’400 ISO.

But with the Nikon D7000, an older APS-C camera, it is no more than 1’600 ISO. With my APS-C mirrorless Fuji X100s, it is 3’200 ISO (due to a more recent sensor). And with my compact Panasonic LX100, no more than… 400 ISO.

This has been a surprise. I used to shoot way above 400 ISO with my compact, but on closer look, it is not without consequences for image quality.

It also shows how some sensors are just much better because their SNR stays stable (D750 or LX100) before dropping at very high ISO, while others decrease steadily (like the X100s or D7000). With the former, you just shoot at whatever ISO you want below a given limit, whereas with the latter, you try to keep ISO as low as possible every time.

Please contact me if you want to know more about this approach and how to shoot at high ISO without losing image quality.

RAW images are finally supported by Windows Explorer

(About the new Microsoft Raw Image Extension on Windows 10)

Until recently, the Windows Explorer strategy with regard to RAW files has been “not very consistent”, to say the least. For instance, it has been possible for a while to display thumbnails of Canon and Nikon raw files (.CR2 and .NEF) in the explorer, but most other proprietary formats could not be displayed. It was mandatory to use a third-party viewer. There are quite a few, free, which work well. But using another tool when all you need is to quickly browse thumbnails or copy/paste files is clearly overkill.

In early January 2019, Microsoft released a new application for Windows 10 to fix this well-known issue. Basically, it adds native viewing support for RAW images: Microsoft Raw Image Extension.

RAW and JPG, different formats, same experience?

It is now possible to view thumbnails of quite a few RAW formats like any JPG image. If you are using Microsoft Photos (still not a very mature application but, let’s be honest, improving year after year), you can similarly get a great full-screen preview of your RAW image. Again, you can get this with a third-party viewer, but having it well integrated, as for JPG files, makes our life easier. Similarly, you have direct access to the basic EXIF metadata of RAW files when hovering the mouse over the thumbnail, as for any JPG.

Which file formats are supported?

This Microsoft application is based on LibRaw, a quite well-known open-source library when it comes to RAW file management. So, in theory, it should support most RAW file formats from most cameras.

I have tried with RAW files from Sony (.ARW), Nikon (.NEF), Canon (.CR2), Panasonic (.RW2) and Fuji (.RAF) and everything is fine at first glance.

How to use it?

First, you need to know this application is not yet available on the official Windows 10 release. You need to join the Windows 10 Insider Preview, then download the latest build available from the Windows 10 settings menu (search for Windows 10 update). Finally, you can download and install the application itself (Raw Image Extension) from the Microsoft Store.

If you don’t want to do this, you will need to wait a few weeks or months, but the application should be available in 2019 anyway. Either be patient or be bold.

For the software developers

So far, developers must use specific libraries to read and convert RAW files (for example, ImageMagick). Thanks to the new application, it is possible to reuse the Windows Shell thumbnails if you need, for instance, to display a gallery of RAW images, like for any JPG file. This is nice, as it will improve the performance of viewers displaying RAW files: it is no longer needed to convert the RAW to JPG, as Windows 10 has already done it.


Be aware the limitations below might change, as I am not using the final version of Windows 10 to be released with the Raw Image Extension.

(Tests done the 25/03/2019 on W10 build 18362.1)

  • Extra Large Icons in the explorer are far from extra-large on any 4K screen. They are like they used to be… which means they are ridiculously small for 2019 monitors.
“Extra Large icons” on a 4K screen
  • LibRaw does not yet support Canon’s latest RAW format, .CR3, so neither does the application. It will probably come in the future, but as far as I can check, it is not even being developed at the moment.
  • One more thing for developers: there is no change in the extra-large thumbnail size from the Shell file. They are still no bigger than 1024 pixels wide – not exactly extra-large by 2019 standards.


From the tests done on the latest Windows 10 build, Microsoft is still not at the level one can expect when it comes to photographers’ main features. But it is improving, maybe not quickly but at least steadily, with the 2018 improvements to the Microsoft Photos application and this new Raw Image Extension to be released in 2019.

A quick overview of the challenges when we want to store our images or videos

The problem has been discussed plenty of times, but I will try to keep it simple. Basically, it is about:

  • How to organize your files
  • How to store them
  • How to be sure your back-up strategy is resilient to different risks

How to organize your files

There are quite a few articles or blogs about the topic (this one is recommended, translate it into English if you need), but I like to put it down to something quite simple:

  • We need folders even if we use tags and metadata. A folder should correspond to a photoshoot, because it is important to be able to come back to what really happened, the way it happened. Tags will make you lose sight of this context; they are good for search and retrieval. Any picture is always part of a photoshoot. This should not be forgotten.
  • There are different ways to organize folders, but a good principle is per year and, within each year, per event or main category. Or by month if you really shoot a lot. Anyway, you get the point.
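As a small illustration of the year/event convention, a minimal sketch with Python’s pathlib. The root and event names are examples, not a prescribed scheme:

```python
# Minimal sketch of the year/event folder convention using pathlib.
from pathlib import Path

def shoot_folder(root, year, event):
    """Return (and create if needed) the folder for one photoshoot:
    <root>/<year>/<event>."""
    folder = Path(root) / str(year) / event
    folder.mkdir(parents=True, exist_ok=True)
    return folder

# Example names (purely illustrative):
print(shoot_folder("Photos", 2019, "2019-08-summer-trip"))
```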

Smartphone complexity

The smartphone has become for many the only camera they use, and for any photographer it can be useful as well. The way the operating system stores the data is mostly hidden, to let the user browse the images in a different way from how they are stored. This sounds like a fair principle, focusing on the user experience: it tries to hide some technical complexity. But it brings the complexity back in a somewhat unpleasant way, as you need to understand where and how the actual files are stored and manage them accordingly, like any other digital asset. It is what it is: spend time and learn how to manage the folders and images on your smartphone(s) like on any other device (for Android, some information is available here).

How to store them

Again, this is a topic discussed many times (this article looks like a good introduction to this topic). I would nevertheless consider the different options:

  • Hard drive of course
  • Backup hard drive (external or internal, or both)
  • Cloud backup
  • Backup on a 2nd computer for the cloud data

About cloud providers: be sure they store the images at their original quality. It is a backup solution, so you need to have the originals stored. Read also how they use the data and where (in which country) the data are stored. Have a look, typically annually, at the company behind the service. This is a quick check to be sure you are working with the right organization for your needs, one which can offer some long-term safety. You don’t want to change provider every couple of years.

Main risks

The most basic one, still ignored by many, is hardware failure of the hard drive. The point is not whether it has become very rare with SSDs or not; it is by nature something that may happen anytime. It is a risk to be considered.

Another risk, quite a painful one, is certainly having your computer stolen, and the same for your backup drive. Really unpleasant when both are stolen at the same time because they were in the same location. If you look at statistics, you will discover that this event is far from unlikely, considering you need to evaluate it over a lifetime.

Fire, flooding or other natural events look very unlikely to many, but again, over your whole life, the probability of facing such an event is certainly not zero, even if the odds remain in your favour. There is no reason not to be protected from this risk as well.

The last main risk would be having the password of your cloud account stolen (provided you don’t have two-factor authentication) and/or your device used at the same time to delete your whole data set. Unlikely, but not impossible.

Risk management

Below is a basic summary:

Risk vs. storage solution (“N” means you are not protected against this risk):

| Storage solution | Hard drive failure | Natural disaster in your home | Hardware stolen | Password stolen | Major natural disaster |
|---|---|---|---|---|---|
| Hard drive | N | N | N | Y/N | N |
| Backup hard drive, external | Y | Y | Y | Y | N |
| Backup cloud provider | Y | Y | Y/N | N | Y |


In a world of digital data, I would not underestimate the risks; at the same time, it is important to keep things simple. So, I have my own strategy to be protected – as far as I can estimate the risks – against any threat:

  • Any digital asset is saved on the hard drive of my desktop, with an auto-sync backup to a “mainstream” cloud provider (Microsoft, Amazon, Google or Apple).
  • On a yearly basis, I save the data to an external drive that I store in a different place (a family member keeps it, and as we meet every Christmas, it is easy to remember to bring the updated data).
  • I have another backup on my laptop, auto-synced thanks to the cloud provider.

This means I have at least four data sets stored in at least three different locations, whereas all I need to do is a manual yearly backup. Easy to manage. And I feel safe.
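To check that a backup copy actually matches the originals, a simple sketch comparing SHA-256 checksums file by file can help. This illustrates the idea only; it is not a replacement for proper backup software:

```python
# Sketch: verify that a backup matches the original set by comparing
# SHA-256 checksums file by file. Paths are illustrative.
import hashlib
from pathlib import Path

def checksum(path):
    """SHA-256 of a file, read in chunks to handle large RAW files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def missing_or_different(original_dir, backup_dir):
    """List files present in the original set but absent or altered
    in the backup."""
    problems = []
    for src in Path(original_dir).rglob("*"):
        if src.is_file():
            dst = Path(backup_dir) / src.relative_to(original_dir)
            if not dst.is_file() or checksum(src) != checksum(dst):
                problems.append(src)
    return problems
```

Running this against the yearly external drive before putting it back in the cupboard would catch silent copy errors.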

About Clipping blacks and blowing highlights: an attempt to bring together art, science and discipline


Clipping in photography is well known and, whereas it is sometimes done on purpose, it mostly comes as an undesirable effect, because of poor exposure (worst case) or at least reaching the limits of the sensor’s range (best case). By clipping, I mean both blowing highlights and clipping blacks. The topic has been debated countless times in different forums and blogs.

As a summary, some people believe it does not really matter as long as the photo is great, while others advocate why and how to avoid it. Others rightfully point out it is sometimes better not to fix it, while still others explain in detail how to do it the right way.

This is a classic case of differing opinions in photography, between those who do not want to consider anything other than the purely artistic result and the scientists obsessed with staying consistent with physical principles. As usual, both are right and wrong at the same time. Indeed, what matters in photography is the result, the emotions a photograph can carry, and whether you like it. Period. Clipping, no clipping, who cares. At the same time, it is true that blowing your sensor until it can no longer deliver any information but “I am blown” (white, burned) or “I am blind” (black, clipped) is not really what anyone can call good practice, to say the least.

I am trying in this post to find a way to make all these opinions somewhat aligned, in a very much Swiss-like consensus way.

How to detect it and how to fix it

There is also plenty of information about the topic. I would recommend reading:

[1] How to Avoid Burned-Out Highlights
[2] Stop Doing This to Your Photo’s Highlights
[3] What is Clipping in Photography and How to Fix It!
[4] Restore Those Clipped Channels
[5] 6 Ways to Reduce Blown Out Highlights in Your Outdoor Photography
[6] Highlight Clipping in Adobe Photoshop Camera Raw (and Why You Should Care)
[7] What Is Clipping and How To Fix It
[8] Blowing Highlights And Clipping Blacks: The Rule Behind Lost Details

“Physical” and “visual” clipping

Most people know the “physical” clipping well: when the sensor is blown. Technically speaking, it means the pixels of a given channel (R, G, B), or their luminance (a combination of R, G and B weighted according to the characteristics of the human eye), are at the maximum value (typically 255 for 8-bit JPG) or the minimum (0).

But it is also important to remember that what matters is the “visual” clipping: the pixels that are almost blown or clipped also matter because, at least for JPG images, there can be no way to really fix them and recover information from those regions of the image.
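The distinction can be made concrete with a small sketch: count the pixels exactly at the 8-bit limits (“physical” clipping) and those within a small margin of them (“visual” clipping). The 5-level margin is an arbitrary example, not a standard value:

```python
# Sketch: "physical" clipping = exactly 0 or 255 on an 8-bit channel;
# "visual" clipping = close enough to the limits that no detail can be
# recovered, especially in JPG. The margin is an arbitrary example.

def clipping_stats(channel_values, margin=5):
    """Return the share of physically and visually clipped values
    for one 8-bit channel (list of ints in 0..255)."""
    n = len(channel_values)
    physical = sum(1 for v in channel_values if v in (0, 255)) / n
    visual = sum(1 for v in channel_values
                 if v <= margin or v >= 255 - margin) / n
    return physical, visual

# A patch that is "almost blown": few pixels at 255, many just below.
patch = [255] * 2 + [252] * 48 + [128] * 50
physical, visual = clipping_stats(patch)
print(f"physical: {physical:.0%}, visual: {visual:.0%}")  # -> 2% vs 50%
```

A histogram warning that only reports the physical share would call this patch fine, while half of it is visually burned.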

Example: a JPG image of a very high contrast scene.

Let’s look at the clipped pixels, highlighted in blue for the blacks and red for the highlights in the image below. First, one could argue that using JPG in such conditions is not the best idea; RAW would have been by far a better choice. But without starting another endless RAW vs. JPG debate: the image has been poorly exposed, as there are no clipped black pixels (they would be coloured blue in the image below) whereas there are quite a few blown ones (in red). So, basically, the image should have been significantly less exposed.

Same image with highlights in red, blacks in blue (none in this case)

But whereas the number of actually blown pixels (in red) is not so significant, the number of visually clipped pixels is at an unacceptable level. It makes the image ugly, whereas it was an interesting one. These pixels are almost blown from a physical perspective, but to our eye, they are just blown. You can try to reduce highlights or exposure; there is basically no information recorded in the sunny mountain parts of the photo. The image will stay poor. So, what matters is not the truly clipped pixels but those which look clipped. Using Lightroom or another software tool is not enough, even if, again, you can’t do much to fix it when you shoot JPG. That’s a good transition to the next point.

Clipping is not the same animal when you shoot RAW or JPG

I believe it does make sense to differentiate JPG from RAW images when it comes to clipping. For RAW images, with modern sensors, clipping is rare: either you really do it on purpose, or you have no idea how to use your camera’s exposure systems! The example below shows how tolerant sensors now are to clipping:

A very high-contrast image of my son in a hotel room, completely in the shade; the skyline behind him is of course much brighter. This is really an extreme case, and with a good but really mainstream full-frame sensor (Nikon D750 in this case), there is almost no clipping when shooting RAW.

I know it is not so simple, and you can clip some parts of a photo despite your goodwill and expertise while shooting RAW. My point, however, is that it is rarely a problem, and it is easy to identify and anticipate, as it will only concern extremely high-contrast images.

When it comes to JPG, this is a totally different story. It can be easy to clip parts of an image, and it can be difficult to fix, as we have seen above. What matters is, first, to know quite well how to detect that the image will have some clipping. Second, you need to know whether or not it is a problem for your image. There is no universal answer to this (from my perspective, though, it will very often be a problem). One approach would of course be to shoot RAW anytime there is a risk of clipping, just to have more latitude in post-processing, but it is not always possible or desirable. At least you know what to do. So, it looks important to understand the causes and consequences of clipping, and how RAW can fix it while bringing its usual inconveniences (processing time, file size, buffer limits, …). If you don’t shoot RAW, you normally have reasons for this choice. This is a good transition to the next point: this is where good and bad clipping matters in your decision as well.

The Good clipping and the bad clipping

The bad clipping is the one you should not get. Just expose your image better: underexpose it when bright parts risk blowing out, or overexpose it when you would otherwise get too many clipped black pixels.

The good clipping is just inevitable. Below an example:

When we analyse the image below, we can see we have both clipped blacks and burnt highlights: in red the burned pixels, in pink the “visually” clipped ones; in dark green the clipped blacks and in light green the “visually” clipped ones.

Having both significant red and green zones just says you are going beyond the capabilities of your sensor. Just buy a better one with a higher dynamic range… or use an artificial way (flash, umbrella, filters, …) to decrease the contrast, which of course is not always possible or desirable depending on the kind of pictures you are shooting.

Conversely, if you aim at high-key (or low-key) images, the result will be clipped, fair enough. But the pre-processed images – the RAW images or the JPGs out of the camera, before you start to work on them – should not be clipped. To illustrate this, a cute gallery that I like of high-key-on-purpose images:

Gallery on Flickr of white and high key images

I like these images, but I would not bet they were clipped out of the camera.

Good principle: clipping is bad

Long story short, it will be difficult to convince me clipping is not bad. If you are looking to shoot high- or low-key images, or if you want to stylize your images, that’s more a post-processing thing. If you know what you are doing, you can argue “I clip on purpose”, but most of the time, clipping is just bad: your sensor no longer provides information, only a very black-and-white rendering of reality. What you do in post-processing is a different discussion; when you shoot and you anticipate clipping, unless you know exactly why, you should just do whatever it takes to limit it (by under/overexposing or bracketing) or avoid it (same actions + RAW + stacking/HDR).

Conclusion and summary

Let’s start with another example. From my perspective, the image below is poorly exposed, over-clipped into an ugly whitish-grey sky:

The city of Mopti, Mali

The light was terrible, due to haze caused by hot air. This image looks ugly to me, whereas Mopti is such a dramatic city. I tried to post-process it, but there was no way to fix it (I was travelling, short on time, and I did not see a way to avoid clipping). The light was bad; it is what it is.

My point: clipping is (very often) bad, even when you can’t avoid it. There may be some counterexamples (try to shoot an image of a polar bear in the Arctic without clipping the snow…), but they demand that you at least understand how to produce a pleasant image and take countermeasures to reduce the visual impact (shooting RAW, shooting only when there are some shadows to produce darker zones, …).

Tokyo, Japan: RAW image underexposed by 1.5 EV, no final clipping, whilst the original image before post-processing looked challenging, with both underexposed (the sphere) and overexposed parts (backlit windows).

When you always shoot the same kind of picture, you know what you are doing; you don’t really need the following conclusion, as you have no problem delivering images you are familiar with. But it is also good in life to try new things, and when you shoot new subjects, in a new way, in new places, you will have many reasons to fail to deliver great images. That is why it is worth remembering some basic principles. Beyond all the discussions and remarks, I like to keep in mind something easy to remember:

Shoot whatever you like, but clipping is bad.

The more you know how to detect it, avoid it or at least manage it, the better. It is not a fight between art and science, it is about discipline.

Ultra wide angle: no silver bullet for full-frame cameras but a hell of a choice

Over the last years, besides the usual lenses proposed by the main camera manufacturers, quite a few independent lens makers have developed a very interesting and complementary offer when it comes to ultra-wide lenses (21 mm or less, for full-frame sensors). So, if you are looking for such a lens, you have quite a few options, whatever your camera may be.

I am not considering any fisheye in this article, only rectilinear ultra-wide angles, and only for full-frame sensors, not APS-C, even if the rationale looks the same for them.

That said, the Nikon and Canon DSLR mounts are going to become somewhat obsolete with the long-awaited rise of their mirrorless product lines, and so will most of their lenses. But for now, we still need to live with the “old” DSLR mounts for Canon and Nikon, as most of their lenses are not yet available for the mirrorless bodies. Sony users certainly have an advantage from that perspective.

The goal of this article is certainly not to be another lens review, nor to be exhaustive, but to highlight the new world we are living in, with a lot of choices. To choose is to sacrifice, and I wanted to focus on a few important questions and what they mean for the choice of lens.

As expected, I can’t say there is a silver bullet; it depends on the purpose of your lens. What really matters to you? When it comes to defining what is really needed, the list becomes a little bit long due to the possibilities offered by the different manufacturers:

  • Why do you need an ultra-wide lens?
  • Do you really need a zoom, or will a prime lens do the job?
  • How important is maximizing the field angle? (I mean, is 20 mm enough, or is the widest still not wide enough?)
  • Do you need a front filter?
  • How important will the weight be?
  • How important will the size be?
  • Are you on a budget, or is that a detail?
  • If you need a zoom, do you really need to go up to 35 mm? (beyond wide angle)
  • Do you need image stabilization?
  • How important will a fast lens be?
  • Do you need a weather-sealed lens?
  • Must the mechanics be built like a tank, or is plastic fine?
  • Do you really care about lens sharpness? (most of the time given too much weight for the use we make of it)

Depending on your answers, the choice will narrow down dramatically.

You can find a lot of tests and advice on the topic; below are some good links:

Techradar: the best wide-angle lenses for Canon and Nikon DSLRs in 2018

Lenstip: quite extensive in the list, but not in the tests’ depth and details

Ken Rockwell: ultra-ultra wide lenses (Nikon only), and Nikon Ultrawide FX Zooms (actually not only the zooms, but Nikon Only)

DXO Mark (first choose zooms up to 35 mm, then primes at or below 21 mm)

Optical Limits (formerly known as Photozone): very extensive as usual for both the zooms and the primes.

But let me share with you my opinion on most of them to complement the usual tests:

First and foremost, we can't get it all at the same time; again, there is no silver bullet (at least none so far, or please let me know):

  • If you need a zoom, I would say it should go up to 35 mm; it will be heavy, and not so fast. Why 35 mm? Because such a zoom can shoot both as an ultra-wide angle and as a standard lens. Zooms that go up to "only" 24 mm are useful, but not that much, in my experience and by design: 24 mm is still a wide angle, and most of the time, when you need wide angle, you need very wide angle. So you will occasionally, if not rarely, shoot at 24 mm with such a zoom. 35 mm is different, because you can start to use your zoom as something other than a wide angle. Above 35 mm, it is very rare to find a zoom that can at the same time reach a very wide angle of view.
  • If you need a light lens, or a compact lens, it is very likely to be a prime. This is obvious.
  • If you need a front filter, you may have to forget the shortest focal lengths (typically 16 mm or under). This question can become emotional for many, but again, be clear on your needs.
  • If you don’t really need AF, and that’s likely with such lenses, it will broaden the choices with very interesting options.
  • If you don't need a sharp or fast lens, what is the point of having a DSLR? With the rise of computational photography and the progress made by smartphones, you should really be demanding with your full-frame lenses.

They may not be my favorite choices for shooting ultra-wide, but below are a few exotic lenses I liked for stepping out of the crowd:

Irix 11 mm: a really ultra-wide-angle lens; manual focus only (not really a problem at this focal length most of the time), solid, heavy, and aiming at providing sharp images.

Tokina 17-35mm f/4 PRO FX: if you are on a budget but do need an ultra-wide zoom. No image stabilization and a noisy AF motor, but certainly a decent lens for a bargain price compared with the main brands.

Sigma 20mm f/1.4 DG HSM "ART": a great lens; expensive, bulky, heavy, no weather sealing, but excellent quality and super fast. This lens is a match if you need to be about 2/3 EV faster at 20 mm (I am not sure it is worth it in most cases, but if you need an f/1.4 ultra-wide angle, you have a winner; I am not sure there is even another lens to compare that opens up to f/1.4).
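To put a number on that "2/3 EV" figure: the gain in stops between two f-numbers is 2·log2(N_slow/N_fast), since light gathering scales inversely with the square of the f-number. A quick sketch (the f/1.8 comparison point is my assumption, as typical of competing ultra-wide primes):

```python
import math

def ev_gain(f_slow: float, f_fast: float) -> float:
    """Exposure gain in EV (stops) going from f/f_slow to f/f_fast.
    One stop doubles the light; light scales with the aperture area,
    i.e. inversely with the square of the f-number."""
    return 2 * math.log2(f_slow / f_fast)

# f/1.4 versus a typical f/1.8 prime: roughly 2/3 of a stop
print(round(ev_gain(1.8, 1.4), 2))  # 0.73
```

The same formula confirms the familiar full-stop ladder: f/1.4 to f/2.8 is exactly 2 EV.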

The three pillars of photography

I have written several times that technical innovation can either foster your creativity or, most of the time, be a useless distraction. I am not opposed to innovation, quite the opposite of course, but I like to believe one should always remember the basics:

1. Subjects’ choice

Whatever the technology and the gear, and even if you know how to post-process images well, you need to be creative and have artistic skills if you want to create "great" images. That is not the whole story, in my humble opinion, but I like to believe it starts here: learn to be creative, be yourself, and express yourself.

2. Shooting skills

Some photographers have "the eye"; most have not. You can hardly learn that. Some know how to compose and when to shoot.

3. Post process and technology

Yes, never be overwhelmed by them; they are nothing but tools for the artist. But it is more important than ever to know everything about photographic technology and how to post-process images.

At the end of the day, photographers who excel at the three pillars of photography are usually admired or, at least, can produce amazing pictures. Know where you stand, and which topics you need to improve!

No more reason left for buying an APS-C DSLR?

A few years ago, the main reasons to buy and use a DSLR were the following:

  • Better image quality
  • Optical viewfinder
  • Great control of depth of field (thanks to a bigger sensor and faster lenses)
  • Better controls and ergonomics
  • Faster AF and more frames per second
  • Access to a full photographic system of lenses, flashes, and other accessories

Nowadays, thanks to enhanced sensors, mirrorless cameras, miniaturization, and specialized cameras for every kind of photographer, most of these reasons have become less and less true. Of course it still makes sense for some professional photographers; of course I, like many amateurs, still prefer shooting with a DSLR. But as a matter of fact, I like to say the main reason to still go for such a camera is: "I don't want to compromise. I accept paying a lot, carrying a heavy bunch of gear, and owning several bodies and many lenses and accessories, because I also accept spending hours post-processing my images. All I want is the best gear available to get the pictures exactly as I want them."

This means an APS-C sized sensor (DX for Nikon) DSLR no longer makes sense, except for those who have never tried one before and want a cheap one (like a Nikon D3200, which costs far less than a mirrorless while still able to make really excellent images). Frankly, although I own a Nikon D7000, I don't see the point of having a DX DSLR nowadays when you are an "experienced" photographer. Full-frame (FX for Nikon) cameras are becoming really affordable and are, in a way, the only consistent option for a DSLR given the present competition from great mirrorless cameras and excellent compact cameras. Before 2000, only such cameras existed anyway!

It does not mean the manufacturers will stop developing lenses for DX, nor stop releasing new cameras (Nikon recently refreshed its D7000 with the D7100); I just don't recommend investing in a DX DSLR system. If you have one, like me, you can still use it as a secondary system, or simply because it still works very well, but the DX era basically sounds over to me. Buy FX DSLR cameras if you want no compromise, and medium format if you can afford it. Again, if you are on a budget, a DX DSLR could be your first DSLR, but it will just be "temporary". And don't forget mirrorless cameras, compacts, and smartphones as complementary but "mandatory" cameras.
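The arithmetic behind the FX/DX comparison is simple: the DX sensor is about 1.5× smaller diagonally, so the focal length (for field of view) and, for an equivalent depth of field, the f-number both scale by the crop factor when converting to full-frame terms. A minimal sketch (the 35mm f/1.8 example lens is my assumption):

```python
DX_CROP = 1.5  # Nikon DX sensors have a ~1.5x crop factor versus FX

def fx_equivalent(focal_mm: float, f_number: float, crop: float = DX_CROP):
    """Full-frame-equivalent focal length (field of view) and f-number
    (depth of field) for a lens mounted on a crop-sensor body."""
    return focal_mm * crop, f_number * crop

# A 35mm f/1.8 DX lens frames and blurs like a ~52mm f/2.7 lens on FX
focal, aperture = fx_equivalent(35, 1.8)
print(f"{focal:.1f} mm, f/{aperture:.1f}")  # 52.5 mm, f/2.7
```

This is why "great control of the depth of field" from the list above favors the bigger sensor: to match a full-frame f/1.4 look, a DX lens would need to open to roughly f/0.95.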

Nowadays, it does not really make sense to own just one camera. And certainly not a DX DSLR!

Further reading:
Why DX has no future
Full frame war
Full frame goes mainstream

Digital photography in 2013: what can come out from the end of a revolution

The digital revolution may have begun around 1999 or 2000 with the first real DSLRs from Nikon and Canon. Almost 15 years later, the evolution continues: every quarter great cameras, software, or new web services are released. But I believe more and more that this is the end of the digital revolution. And that is good news for photography, because we may be able to focus again on what really matters: the picture, not the technology.


Cheap point-and-shoot cameras and smartphones are making everyone a photographer. Modern sensors and skilled engineers allow everyone to take very decent shots, even with no knowledge of photography. Digital filters and photo sharing make the pictures look even better and available right away to those who matter to us. An anonymous person can become very famous thanks to Instagram, much more so than many legendary photographers. So what? That's fine; these are just the consequences of the modern digital revolution. It is time to learn to live with it.


We have learned HDR, digital filters, advanced post-processing, and much more over the last years. We can now have a small camera with a 40x zoom for a fraction of the price of the whole set of lenses we used to need ten years ago. Or a mirrorless, or a tiny compact taking better pictures than DSLRs did a few years ago. We can store and share online so easily nowadays. Much more will come, of course, and we will have to adapt. But I wonder whether most of the breakthroughs might not be behind us. And that is also good news. Revolutions are exciting, but they distract us when they don't exhaust us. A necessary evil, but still an evil.

Above is an example of how my pictures have evolved over 20 years of mountaineering! Better or worse? It does not matter; things have changed, and dramatically, to say the least.

No revolution lasts forever

Mirrorless cameras did not change anything in this revolution, even though they are great cameras and improved the revenues of the major vendors. I like to say they rang the bell: this is the end; we are entering a new era. Despite being a major innovation, they do not change the game that much. And I doubt Lytro will bring anything significant either, by the way.

The same goes for Google+ and Facebook's recent photo-sharing improvements. Photo sharing is becoming a commodity nowadays. It may be good for everyone, but it won't change the game.

The bottom line

We are getting bored with the revolution. We can now focus again on what really matters: taking pictures. We don't have to spend weeks testing new stuff; we have to spend weeks focusing on creativity, photography, and what we want to show to others. It is no longer about software and hardware; it is about life and creativity.

Many photographers never stopped working this way, fair enough, but I like to believe they were rather lost in the turmoil of this revolution. The dust is settling down, so in 2013 I want to see the rise of new great photographers: not those showing HDR on Flickr, their meals on Instagram, or selfies on Facebook, but those who have something to say.

Every revolution evaporates and leaves behind only the slime of a new bureaucracy – Franz Kafka