The missing cameras – Episode 2

Photo by Alexander Lam on Unsplash

Back in 2012, I identified several kinds of cameras which did not exist, even though there was certainly a need for them. I am not saying manufacturers should design and sell them; I have no idea whether they could make money out of them, and I easily understand that they prefer to focus on other topics more valuable to their business. But as a photographer, I do know I missed them:

  • Wide-angle compact (16-35 mm full-frame equivalent): still missing. Nikon almost did it, but the project was eventually cancelled.
  • An underwater camera with RAW capabilities and more than point-and-shoot controls: partially there, as RAW is now frequently available. Most of these cameras are basically point-and-shoot, but let’s say the category now exists.
  • A viewfinder on small compacts: yes, absolutely. The Sony RX100, of course, and a few more.

The idea is not to dream of impossible compromises like a “16-500 mm f/2.8 zoom weighing less than 500 g, sharp as hell, and not expensive at all”. But these missing cameras are certainly not impossible to design and manufacture.

And let me add a couple more:

  • Very light drones, under 250 g, that can shoot RAW. Basically, drones for photographers… DJI almost did it, but no DNG files are available on their Mavic Mini.
  • Ruggedized full-frame compacts. Yes, I know, a niche market, but a compact, mirrorless or whatever camera with a “big” sensor (at least APS-C) would definitely make sense. I am not talking about weather-sealed bodies, with which you always wonder whether the rain is going to destroy your camera, but a camera actually certified to go underwater. Maybe not for scuba diving, which is a specific need and the reason underwater housings exist, but rather a travel body able to follow you when you swim, snorkel, or shoot under monsoon rain.

Together with the still-missing wide-angle compact, that makes another three missing cameras for 2020! And you, what kind of camera do you want that the market does not provide yet?


How to upload photos from your digital camera as easily as from your smartphone

Introduction

It is so easy to use the images from your smartphone camera. Many photographers can’t understand why they still need to use an SD card, or connect the camera through Bluetooth or Wi-Fi, download the files to a hard drive (on a PC, a Mac or another device), and finally, hopefully, have them available in the cloud. I have tried to streamline the process as much as possible, but it is far from an easy path. One may believe it should work like a smartphone camera: take your picture and have it available in the cloud without doing anything, since we all have a smartphone in our pocket when we shoot. In theory, it should be the same experience. In reality, it is not.

Integration still in its infancy

There are several steps which work fine, or just OK, but do the job. For instance, with Nikon’s new SnapBridge app, the connection between the camera and the smartphone works without too many problems. There is still the strange need to switch to Wi-Fi, but that’s understandable: since we want to download RAW images, Bluetooth is certainly not the right technology. It used to be worse, so let’s be honest, it is improving.

However, if you want the RAW images downloaded by SnapBridge to be uploaded directly to your cloud provider (OneDrive, Google Drive, Dropbox, just to name a few I have tried), good luck… I am not saying it is impossible, but after having “invested” a couple of hours trying to streamline this process, I have to admit I failed!

I have tried another app besides SnapBridge: Camera Connect and Control for Android. The “Auto Download” feature looks promising. In theory, it does exactly what I am looking for. In reality, it is not so trivial. First, you need to pair your camera with your smartphone over Wi-Fi. This is not done automatically, as it would be with Bluetooth, and it can’t really be, since it disconnects your smartphone from any other Wi-Fi network. That said, as soon as the pairing is done, the auto-download of RAW files works very well. The app does not cost much, less than $10. So, let’s say it is better than the SnapBridge experience by far, thanks to the auto-download feature. Of course, you have to check whether the app works for your setup (different cameras, smartphones and RAW file formats can give mixed results, as usual), but it is really promising.

Since it takes so little effort, it’s easy to do before bed each night, or when you’re about to head out in the morning. But I would not do it “on the fly” in real time, like I can with my smartphone’s camera…

Let’s move to the next step

A “transactional” process is still missing: I would love a consistency check verifying that all the images from my camera have effectively been saved in the cloud, done in real time, either over the mobile network or at least automatically whenever the smartphone is connected to a Wi-Fi network. And, as the next obvious step, the useless images would be deleted from the camera automatically. If you start to download a whole photo session of hundreds of images, how can you be sure the Wi-Fi did not disconnect and a few images are not missing? If a manual check is needed, the whole process becomes useless except for some niche needs. But we are not that far off… That’s a positive way of saying that, from my perspective, we can’t yet avoid a manual download of the SD card. Yes, USB-C and auto-upload from cloud providers help, but it is still more painful than the cheap (or not) camera in your smartphone…
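To illustrate what such a consistency check could look like, here is a minimal sketch in Python, assuming the SD card is mounted locally and the cloud provider exposes a synced folder; the paths are hypothetical, and the hash-based comparison is my own assumption rather than a feature of any existing app:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large RAW files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def missing_in_cloud(sd_card: Path, cloud_folder: Path) -> list:
    """List images present on the SD card but absent (or corrupted) in the cloud folder."""
    cloud_hashes = {file_digest(p) for p in cloud_folder.rglob("*") if p.is_file()}
    return [p for p in sd_card.rglob("*")
            if p.is_file() and file_digest(p) not in cloud_hashes]

# Hypothetical mount points; adapt them to your own setup.
gaps = missing_in_cloud(Path("/media/sdcard/DCIM"),
                        Path.home() / "OneDrive" / "Photos")
print(f"{len(gaps)} image(s) not yet safely in the cloud")
```

Only once this list is empty would the automatic deletion on the camera be safe to trigger.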

Auto-alignment of Time Lapses

When you create a time lapse, wind can shake the camera on its tripod. For many time lapses, it won’t be a problem, either because there is no wind or because wide-angle images will not suffer from any visible shake. However, if you shoot with a non-wide-angle lens, and especially with a telephoto lens, it will certainly be a major issue.

In theory, there are different tools quite capable of aligning time lapse members. Photoshop, since it can stack images for different purposes, has all you need. In reality, it is not designed for more than 10-20 images, so forget Photoshop for aligning your time lapse members. Similarly, aligning them manually is nonsense, as you can easily end up with 500-1’000 images, if not more, for a single time lapse.

Adobe Premiere is a much better tool for such tasks, but at $20+ per month, it is certainly overkill! It does not really make sense for most of us to subscribe to the software just to align time lapses, right?

That said, any tool with tracking capabilities could do the job, but in practice it is often a different story. For instance, I have tried Hugin, a great tool for panoramas, free and open source. But as mentioned in various links, it can be tricky to align hundreds of images, especially if they have low contrast, which is frequent for time lapses shot at sunset or sunrise. Remember, the software’s main goal is to create panoramas, not to align time lapse members…

Basically, most of these tools have the tracking capabilities to align the members, but it is time consuming and sometimes more than challenging. For these reasons, I have developed, and plan to release in the next version of Futura Photo, a feature able to auto-align several hundreds of images without any technical knowledge, and without manual feature points enabled or fine-tuned by the user.
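To give an idea of what the alignment itself involves, here is a generic sketch using OpenCV’s ECC image registration; this is a common technique for stabilizing time lapse frames, not the actual Futura Photo implementation, and the file names are hypothetical:

```python
import cv2
import numpy as np

def align_to_reference(ref_gray, frame_gray, frame_color):
    """Estimate a translation from the frame to the reference with ECC, then warp."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(ref_gray, frame_gray, warp,
                                   cv2.MOTION_TRANSLATION, criteria)
    h, w = ref_gray.shape
    return cv2.warpAffine(frame_color, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)

# Align every frame of the time lapse to the first one (hypothetical file names).
reference = cv2.imread("frame_0001.jpg")
ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
for i in range(2, 501):  # e.g. a 500-frame time lapse
    frame = cv2.imread(f"frame_{i:04d}.jpg")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imwrite(f"aligned_{i:04d}.jpg", align_to_reference(ref_gray, gray, frame))
```

A translation-only model is usually enough for wind shake on a tripod; a full homography would be overkill and slower over hundreds of frames.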

For example, this video shows what it did with a long-range time lapse shot from Geneva, Switzerland. Sunsets are more beautiful when the wind blows, but without the tool, forget the long-range time lapse:

Another dramatic time lapse, auto-aligned:

If you want to know more about the feature, the software or the release date, please contact me directly on Twitter or through the Futura Photo website.

Why Futura Photo?

November 14th, 2019 was the official launch day of Futura Photo 1.0. The company behind this software is Camera Futura Sàrl, headquartered in Geneva, Switzerland.

The software aims at streamlining the pre-processing steps needed after a photo session. Indeed, over the last years, I have had the feeling that (1) I was wasting my time culling images after a photo session and (2) I was not always very efficient at it.

Of course, professional photographers who always shoot the same kind of images have learned how to cull their thousands of images efficiently every week. But for amateurs shooting 10’000-20’000 images per year, or for people who like to try and experiment, I concluded that a tool was needed not only to speed up the culling itself and make it more efficient, but also to propose a “quality gate”. Many images should not even pass this gate, as they simply do not meet certain requirements for acutance or exposure. Furthermore, duplicates have always been a struggle for photographers. Similarly, managing hundreds or even thousands of time lapse members can be time consuming. And panorama members are easily missed when you start to build panoramas of 10-plus images.

I could add many more examples where we photographers need software to help us even before the post-processing work starts: aligning time lapse members when the camera has been shaken by wind, choosing between JPG and RAW when both were shot together, and much more.

I have noted that over the last years, I have done a good job of keeping only the best images, 5% at most of what I shoot, but only because I invest time after each photo session. And these tasks are not fun. They are boring, time consuming… and, from my perspective, very important. I don’t keep the useless images which would “pollute” my image library or require several terabytes of storage. Over these years, the need to automate these tasks has become more and more obvious. Last and maybe not least, this matters for our future in a sustainable world!

That’s Futura Photo’s goal: to complement the many existing software tools helpful to photographers, helping them spend more time doing what they like and less time on what is needed but not exactly exciting.

The need for streamlining the pre-process of a photo session

This is not the most glamorous title one could expect, as image processing after a photo session is at worst a pain, at best a necessity. It is something photographers don’t like to talk about much. They prefer to discuss how to enhance their images. Fair enough. But execution is key, and it is not because something is boring that it is unimportant!

So, even before the processing itself, meaning classifying images into categories (best, to be archived, to be deleted, …) and enhancing the best ones with ad-hoc software (Photoshop, Lightroom, Capture One, whatever, …), there are some steps which are uninteresting and time consuming across the board. In particular:

  • If you shoot RAW + JPG, you need to decide what to do with either the RAW or the JPG,
  • If you shoot time lapses and compose them manually (with software like LRTimelapse),
  • If you shoot panoramas that you also want to compose manually,
  • If, like me, you don’t like storing dozens of similar images, you must first delete duplicates,
  • If you reject some images because of their exposure, grain, or other technical issues,
  • If you shoot both videos and still images (each will usually require a dedicated workflow),
  • … and much more.

I am still surprised to see that these steps have not been automated, or only very partially, and certainly not in an integrated way that lets photographers with different needs improve their productivity. Modern technologies could support them, maybe not by choosing the best image, but at least by automating file moves and deletions, or simply by speeding up these different steps while making them more efficient. For example, I am not aware of a tool which helps detect which images are part of a panorama. Plenty of software exists to assemble a panorama, but when you shoot thousands of images, it is not so trivial to tell panorama members apart from merely below-average images.
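For illustration, one plausible heuristic is to measure feature overlap between consecutive shots: consecutive images sharing many features are likely panorama members. This sketch uses OpenCV’s ORB features, and the 15% threshold is an arbitrary assumption that would need tuning:

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def overlap_ratio(img_a, img_b) -> float:
    """Fraction of keypoints in A that find a match in B: a crude overlap measure."""
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matches = matcher.match(des_a, des_b)
    return len(matches) / max(len(kp_a), 1)

def looks_like_panorama_pair(path_a: str, path_b: str, threshold: float = 0.15) -> bool:
    """Two consecutive shots sharing enough features are likely panorama members."""
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    return overlap_ratio(a, b) >= threshold
```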

Another example is time lapses. Among hundreds or thousands of images, some belonging to a time lapse and others not, the members can be tricky to detect, and at the very least, sorting out the whole set takes time.
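Time lapse members, on the other hand, betray themselves by their near-constant capture interval. Here is a rough sketch of that detection, reading the EXIF capture time with Pillow; the one-second tolerance and 30-frame minimum are arbitrary assumptions:

```python
from datetime import datetime
from pathlib import Path
from PIL import Image

def capture_time(path: Path) -> datetime:
    """Read EXIF DateTimeOriginal (tag 36867), falling back to DateTime (tag 306)."""
    exif = Image.open(path).getexif()
    raw = exif.get_ifd(0x8769).get(36867) or exif.get(306)
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")

def detect_time_lapse_runs(paths, tolerance=1.0, min_run=30):
    """Group images shot at a near-constant interval: likely time lapse members."""
    stamped = sorted((capture_time(p), p) for p in paths)
    runs, current, last_gap = [], stamped[:1], None
    for prev, nxt in zip(stamped, stamped[1:]):
        gap = (nxt[0] - prev[0]).total_seconds()
        if last_gap is None or abs(gap - last_gap) <= tolerance:
            current.append(nxt)         # interval is steady: same run
        else:
            if len(current) >= min_run:
                runs.append([p for _, p in current])
            current, gap = [nxt], None  # interval broke: start a new run
        last_gap = gap
    if len(current) >= min_run:
        runs.append([p for _, p in current])
    return runs
```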

Last but not least, I understand the “one-stop-shop” approach. That’s the holy grail in software, and it is what Lightroom (and its direct competitors) tries to achieve for most photographers. But I am not convinced, as needs can be antagonistic, and one-stop shop means “compromise”. I would rather, maybe naively, believe in a long-term trend of software working together rather than one tool doing everything. My point? There is still room in 2019 for new software when it comes to image workflows.

Why it is important to only keep the best shots after each photo session

I see several reasons to keep only the best images after each photo session and to archive or delete all the other shots. I mean, we should not keep more than probably 5% of the photos we take. And the ratio tends to decrease even further as the photos get older. Not many photographers have the discipline to take all the time needed to go through every image and remove duplicates, poorly exposed and badly focused photos. But there are several key advantages to doing so:

First, we are no longer “polluted” by average or poor images when we search our image catalogues or look back at our work, whatever the reason. Furthermore, it will of course drastically reduce the storage needed. One could argue storage is now so cheap that this is a pretty weak reason, but as I already wrote, it is good practice for our planet.

As it is painful to clean the backlog, it would at least make sense to apply the principle to every new photo session, and occasionally when browsing older archived photo sessions.

That’s a classic quality-gate methodology, and it also makes sense for photography. As 80% to 95% of images tend to be useless and, let’s be honest, not so great, the impact is significant. Believe me or not, it is so good to browse only images you really like. But again, this is both a question of discipline and of technology, as there is so far little software to help you focus on the best images.

The sky is not the limit: smartphones and photography

The smartphone industry has disrupted photography for most consumers. And over the last couple of years, it has also started to really focus on image quality, at an impressive pace and with dramatic results. A combination of hardware and software improvements (several cameras with different lenses, and computational photography, respectively) is making smartphones really great tools. The classic camera manufacturers are completely struggling, without surprise, while making little progress on integrating their devices with the social features and cloud technology so easily available on smartphones, laptops or any other device; even fridges are sometimes better connected than DSLRs.

However, whereas sales are likely to drop further, as the shift is not over, I can already see several limits to smartphones. Yes, they are killing the point-and-shoot business, for good reasons, but photography is much more than that.

It’s the ergonomics, stupid

First, it is important to remember that many people like taking photographs as an end in itself. They go out or organize a photoshoot to take pictures. So, they don’t care if the device is a little bit too heavy or too bulky. What they need is the right device to take the images they want or need. In that respect, smartphones are hardly a match, as their primary function is not to take images. Touchscreens so far can’t beat devices built on purpose for photographers, with instant access to whatever customization or parameter is needed.

The limits of physics

Of course, a smartphone can’t match the sensor size and interchangeable lenses of dedicated cameras. No need to go into further detail. To those who claim computational photography is going to disrupt much more, I think it is important to differentiate “integration of the camera with the smartphone” from post-processing, whatever it may be.

Computational photography can work with more than smartphone cameras

Indeed, when it comes to computational photography, there is no reason not to implement the technology in DSLR or mirrorless bodies. Basically, there are different steps, and the question is rather how to integrate them together:

  • Collecting the light into a digital image through lenses and a sensor (hardware stuff)
  • Automated computational processing and post-processing (whatever it is)
  • Storing and sharing images (social networks, cloud services).

Convergence?

Over time, I tend to believe the gaps will narrow. Just as the internet did not kill television, smartphones and stand-alone cameras will certainly live together for years, but not as they do today. At the end of the day, what is specific will prevail, and the new technology will have replaced the old one wherever it was flawed or inefficient, for the better.

The carbon footprint of being lazy after our photo shoots: why it has to change

What this article is about

It is trendy, and more importantly necessary, to reduce our carbon footprint. Let’s calculate how much one bad habit of photographers can pollute.

I take on average 10 to 20 thousand images per year. Many pro photographers shoot ten times more, typically above 100 thousand images per year, if not more. At the same time, for various reasons, I work hard at keeping only the best shots. Typically, from these 10-20 thousand photos per year, I store only 1 or 2 thousand. And there is still room for improvement. I don’t think this ratio is exceptional; other people report a typical ratio of 80-95% useless images, whatever the reasons. However, I must confess this eradication takes me a lot of time, and I do understand why people don’t do it; it should be somewhat automated. So I was wondering: what is the impact of keeping all these useless images? How many greenhouse gases do they generate per year? Basically, how much do our useless images pollute when we don’t eradicate them?

How many tons of carbon dioxide per thousand images stored?

Simple question, difficult answer. First and foremost, there are head and tail winds: whereas storing 1 gigabyte (GB) of data requires less and less CO2 every year, images are becoming bigger and bigger as new sensors let you shoot with more megapixels. Same situation for videos. It looks quite challenging to anticipate future trends, but let’s make the calculation as of today, in 2019. It is reasonable to believe head and tail winds will not completely change the result in the next few years.

Let’s try to calculate just a rough estimate…

In this article, I make no calculation for videos, just for still images. I will consider 3 categories of photographers:

  • casual photographers, who typically take 5 thousand images per year,
  • enthusiasts (20 thousand images per year)
  • and pro photographers (100 thousand images per year).

Casual photographers, in this exercise, only create JPG files from their photos, with a 24-megapixel camera. Each JPG file typically weighs 5 megabytes (MB). This means 5 MB x 5’000 = 25 GB per year.

Enthusiasts shoot RAW, with a 36-megapixel camera. They convert 10% to JPG, at 7.5 MB each. This means 36 MB x 20’000 + 7.5 MB x 2’000 = 735 GB per year.

Pros shoot both RAW and JPG, with different cameras and sensors. Let’s make a rough estimate of 15 MB per image. This means basically 1.5 TB per year.

To summarize, I will just consider 1 TB per photographer per year. This simplifies the calculation, it does not change the overall result, and it is consistent with the kind of photographer we are looking at for this effect (mostly enthusiasts or pros).

All these numbers are arguable but that’s a good starting point for a first estimation.

Now, the key question: how much carbon dioxide is emitted to store 1 TB?

Several studies have estimated that around 100 kg of carbon dioxide emissions are needed to store 1 TB of data in the cloud (ref. [1], [2] and [3]). Again, the calculation is quite complicated, and the range is very broad, from typically 50 kg to 2 tons. I am considering 100 kg as a conservative estimate.

This means 1 ton per year for 10 TB, which is what you reach after 10 years of photography, as storage is cumulative.
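The arithmetic is trivial but worth making explicit. A few lines confirm the order of magnitude, using the 100 kg/TB/year figure above:

```python
KG_CO2_PER_TB_PER_YEAR = 100  # conservative estimate from refs [1]-[3]

def yearly_storage_emissions(years_of_photography: int, tb_per_year: float = 1.0) -> float:
    """kg of CO2 emitted per year once the whole backlog is kept in the cloud."""
    stored_tb = years_of_photography * tb_per_year  # storage accumulates over the years
    return stored_tb * KG_CO2_PER_TB_PER_YEAR

print(yearly_storage_emissions(10))  # 1000.0 kg, i.e. 1 ton per year after 10 years
```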

What does it mean in a sustainable world?

In a sustainable world, the average individual emission rate should be around 3 tons of carbon dioxide per year (ref. [4]). We are far from that level now (US: 18-20 tons per year per person; China: 6.5 tons; …), but that’s where we are heading.

It goes without saying that we can’t spend almost a third of our yearly quota (on a sustainable planet) just on storing images. It should not be more than a couple of percent. Once again, this shows that a sustainable world will have dramatic consequences on our lives. It means we should eradicate all our useless images, as they represent 80-95% of this storage emission.

Conclusion

It is time to reduce the data we generate from images and videos. Beyond being mostly useless information stored in excess, reducing it is necessary for living on a sustainable planet. Of course, one can object that these data “might” be useful in the future, who knows? At the same time, it is good practice to focus on what really matters and to be able to retrieve that important information later when needed. Less is sometimes better. And we always find good excuses to refuse change. But this change is needed and, in the long run, inevitable. It is time to be consistent and to eradicate, as a “pre-post-processing” step, most of our useless images, whatever useless may mean.

References

[1] – Carbon and the cloud, Stanford Magazine

[2] – Trends in Server Efficiency and Power Usage in Data Centers, SPEC 2019

[3] – The carbon footprint of a distributed cloud storage, Cubbit

[4] – Stopping Climate Change: A Practical Plan 3 Tons Carbon Dioxide Per Person Per Year, Ecocivilization

There are no rules for good photographs, but there are rules for poor photographs

A "good" image for some, but no rules can apply and some will not even like this image

As DPReview’s Nigel Danson reminds us, and to quote Ansel Adams: “There are no rules for good photographs. There are just good photographs”.

There are no rules for good photographs, fair enough, but I am convinced there are rules to define and detect the poor ones, whatever poor may mean for the photographer. In a digital world, we can take a lot of pictures. I shoot 10’000-20’000 photos per year (a pro can shoot over 100’000 per year). I don’t use more than 1’000 of them. I like to believe it is important to delete most of them, just to make my life simpler when I start the post-processing steps, and when I look back at my images, whether to search for something or for any other reason.

Less is more?

Taking a lot of pictures is not always a bad habit, but at the end of the day, we all must cope with a huge number of useless, poor pictures. Therefore, it seems important to define tangible rules that one can apply manually or through software to eliminate the bad ones as early as possible in the workflow. Ideally, this should happen at “run time”, during the shoot itself, which is certainly possible if images are uploaded to the cloud in real time and analyzed right away.

But to be more concrete, let’s say there is a need to detect and delete (non-exhaustively, and as sketched in the code after this list):

  • Poorly exposed images,
  • Motion blur (not on purpose) and focus blur (not on purpose either),
  • Useless duplicates (whatever that may mean).
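Here is a sketch of how the first two checks could be implemented with standard recipes: histogram clipping for exposure, variance of the Laplacian for blur (a classic sharpness proxy). The thresholds are arbitrary assumptions each photographer would tune:

```python
import cv2

def is_poorly_exposed(gray, clip_threshold=0.25):
    """Flag images whose histogram piles up at the extremes (blown or crushed)."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist = hist / hist.sum()
    return hist[:8].sum() > clip_threshold or hist[-8:].sum() > clip_threshold

def is_blurry(gray, sharpness_threshold=100.0):
    """Variance of the Laplacian: low values indicate motion or focus blur."""
    return cv2.Laplacian(gray, cv2.CV_64F).var() < sharpness_threshold

image = cv2.imread("candidate.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file name
if is_poorly_exposed(image) or is_blurry(image):
    print("candidate.jpg would not pass the quality gate")
```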

Many photographers may claim there is no way to detect poor images programmatically, due to the non-deterministic nature of art. For instance, a histogram might not be enough to detect a poorly exposed image. At the same time, it will be difficult to convince me that when a photographer fails to take a photo as intended, the result is worth keeping, as long as one believes there are quality standards to comply with when it comes to art. It is also about being disciplined and mastering what we are doing. So, it may not be acceptable to keep working on images for which we wrongly set too high an ISO or too slow a shutter speed, or where the main subject is not in focus as we wanted.

It is simple, but not easy

As a conclusion, I tend to disagree about the impossibility of detecting poor images with software. It is certainly possible to detect poor images automatically and get rid of them. The result will not be the same for every photographer; everyone might have to set their own acceptable quality level in terms of exposure, acutance and duplication.

It may be very difficult to delete all the poor images, but fine-tuning the parameters and the algorithms so that we get rid of most of the uninteresting ones would be more than good practice. It would save time and let the photographer focus on what really matters: the good photographs, for which there are indeed no rules.

High ISO: how far should a photographer go?

Shooting at high ISO is a much-commented topic. Some believe its importance is overstated, and indeed, it is far from being that important in photography. At the same time, we should know the limits: how dark can it be? How far can we go at high ISO when shutter speed is critical? Not from a purely technical perspective, but to stay consistent with our overall artistic approach. Some photographers may shoot only in very low light, but that’s unusual. Most of us shoot at high ISO and, at other times, at lower ISO values. So, high-ISO noise is just a constraint we need to deal with.

The problem is knowing, for each camera we own, the ISO limit above which we should not shoot. Indeed, too much noise or too much underexposure leads to unacceptable image quality. The usual approach is purely empirical: when you believe the noise level has become unacceptable, you just don’t shoot at this value or above.

The problem with this approach is twofold: it can be biased, as there is no tangible comparison until you use a scientific measurement of noise, and it does not take into account that you may shoot with different kinds of cameras (from smartphones to drones and DSLRs, full-frame or small sensors). However, regardless of the sensor, a photographer should keep a consistent signal-to-noise ratio between cameras (taking the signal-to-noise ratio as a proxy of noise level). Nobody cares which camera you used for a photoshoot. But all images should be delivered with a similar, if not equal, quality level, irrespective of the sensor.

I have started to measure the SNR (signal-to-noise ratio, a proxy of noise level) of a given camera at different ISO levels. The process is simple (a sketch of the measurement follows the list):

  1. Take photos of the same object or landscape at different ISO values, with the same histogram (no over- or under-exposure between images), but with different cameras. The images must be as similar as possible.
  2. Define a limit above which you believe noise is too high for your best sensor (a Nikon D750 in my case).
  3. Define the ISO limit for each camera at the same SNR value, to ensure consistent quality.
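For reference, here is a minimal sketch of the measurement itself, assuming each test shot frames a uniform target (a gray card, say) and taking the mean over the standard deviation of a central patch as the SNR; the file names are hypothetical:

```python
import cv2
import numpy as np

def patch_snr(path: str, size: int = 200) -> float:
    """Mean/std of a central patch of a uniform target: higher means cleaner."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float64)
    h, w = gray.shape
    patch = gray[h//2 - size//2 : h//2 + size//2,
                 w//2 - size//2 : w//2 + size//2]
    return patch.mean() / patch.std()

# One shot of the same gray card per ISO value, hypothetical file names:
for iso in (100, 400, 1600, 6400, 12800):
    print(iso, round(patch_snr(f"graycard_iso{iso}.jpg"), 1))
```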

The results are shown on the graph below:

Based on this method, I concluded that I can shoot at up to 12’800 ISO with my D750 and, if possible, not above 6’400. It is not that the image is unacceptable above that; it is just to be sure that shooting at high ISO has no significant impact on image quality, according to my own standard. The SNR is indeed stable up to 6’400 ISO.

But with the Nikon D7000, an older APS-C camera, it is no more than 1’600 ISO. With my APS-C mirrorless Fuji X100s, it is 3’200 ISO (thanks to a more recent sensor). And with my compact Panasonic LX100, no more than… 400 ISO.

This came as a surprise. I used to shoot way above 400 ISO with my compact but, at a closer look, it is not without consequences on image quality.

It also shows how some sensors are just much better: their SNR stays stable (D750 or LX100) before dropping at very high ISO, while others decrease steadily (like the X100s or D7000). With the former, you just shoot at whatever ISO you want below a given limit, whereas with the latter, you try to keep the ISO as low as possible every time.

Please contact me if you want to know more about this approach and how to shoot at high ISO without losing image quality.