How does Lightroom CC manage my images on Adobe's cloud?

I have been using Lightroom Classic (LrC) for a while, and I am interested in the innovations and improvements enabled by the cloud architecture of Lightroom CC (LR CC) and by a new platform – the flaws and the technical debt of Lightroom Classic are infamous enough.

Maybe it is just me, but I have struggled a little to understand what LR CC was doing with my physical files. This post tries to explain it, to spare other people the pain it has been for me.

Overview

Long story short, when you import images, LR CC creates a physical copy of your original file (RAW, JPG, whatever you shoot) on Adobe's cloud. So you can archive your file or delete it locally, your choice, but a copy is stored not on your machine, but on the Adobe cloud dedicated to Lightroom CC (don't be confused: Adobe runs several clouds, and Adobe's Documents cloud is a different one).

However, you can configure LR CC to have a copy of some or all files and previews on your local machine.

You can delete an image, but doing so removes the physical copy of that image from Adobe's cloud and from your local machine (unless, of course, you kept the original file before uploading it to LR CC: since LR CC creates a copy, it is this copy which is deleted by your action in LR CC).

How it works

You can find where to store a local copy of your files in Edit -> Preferences:

The path is available here:

What happens when you delete some images?

As you can see in the video above, deleting images in LR CC means they won't be there any longer (no longer visible in your gallery nor in your albums), but Adobe will not delete the physical copy of your image for 60 days.

Remarks and conclusion

I think the overall process is poorly explained and documented by Adobe, but the way it is implemented does make sense to me. It is somewhat disturbing not to have direct access to your physical images in the cloud (though I understand why), and it is made even more confusing by Adobe's Documents cloud, which has nothing to do with the images used by Lightroom CC. But once you understand the rationale behind it, it seems quite a nice way to use LR CC. There is a catch: you can't use other cloud providers. This means more cost to you, as some users could otherwise benefit from cloud storage already available through other services (e.g. Amazon Prime or Microsoft Office 365 both offer you plenty of storage at no additional cost when you already subscribe to these services).

Of course, you can store in LR CC (Adobe's cloud) only what you need before and during post-processing, and archive the original files and the published ones elsewhere once the post-processing in LR CC is done (with your usual cloud provider, for instance, or on your NAS / hard drive if you are not a cloud user). But then you are going to miss some of the added value of LR CC, which is designed for working with all your assets stored on Adobe's cloud.

Commercially relevant for Adobe, not cost effective for the user… unless of course you have no access to "free" cloud storage (already paid for because it is embedded in other service subscriptions).

How to upload photos from your digital camera as easily as from your smartphone

Introduction

Shooting and sharing from a smartphone camera is so easy. Many photographers can't understand why, with a dedicated camera, they still need to use an SD card (or connect the camera through Bluetooth or Wi-Fi), download the images to a hard drive on a PC, a Mac or another device, and finally, hopefully, have the files available in the cloud. I have tried to streamline the process as much as possible, but it is far from an easy path. One may believe it should be like your smartphone camera: take your picture and have it available in the cloud without doing anything, as we all have a smartphone in our pocket when we shoot. In theory, it should be the same experience. In reality, it is not.

Integration still in its infancy

Several steps work fine, or at least do the job. For instance, with Nikon's new SnapBridge app, the connection between the camera and the smartphone is established without too many problems. There is still the strange need to switch to Wi-Fi, but that's understandable: since we want to download RAW images, Bluetooth is certainly not the right technology. It used to be worse, so let's be honest, it is improving.

However, if you want the RAW images downloaded via SnapBridge to be directly uploaded to your cloud provider (OneDrive, Google Drive, Dropbox, just to name a few I have tried), good luck… I am not saying it is impossible, but after having "invested" a couple of hours trying to streamline this process, I have to admit I have failed!

I have tried another app than SnapBridge: Camera Connect and Control for Android. The "Auto Download" feature looks promising. In theory, it does exactly what I am looking for. In reality, it is not so trivial. First, you need to pair your camera with your smartphone over Wi-Fi. It is not done automatically, as with Bluetooth; it can't really be, as it disconnects your smartphone from any other Wi-Fi network. That said, as soon as you have done this pairing, the auto download of RAW files works very well. The app does not cost much – less than $10. So, let's say it is better than the SnapBridge experience by far, thanks to the auto download feature. Of course, you have to check whether this app works for your setup (different cameras, smartphones and RAW file formats can give mixed results, as usual), but it is really promising.

Since it takes so little effort, it's easy to do before bed each night, or when you're about to head out in the morning. But I would not do it "on the fly" in real time like I can with my smartphone's camera…

Let's move to the next step

A "transactional" process is still missing: I would love a consistency check verifying that all the images from my camera have effectively been saved in the cloud, done in real time – either over the phone network or at least automatically whenever the smartphone is connected to a Wi-Fi network. And, as the next obvious step, the automatic deletion of the useless images in my camera. If you download a whole photo session of hundreds of images, how can you be sure the Wi-Fi has not disconnected and a few images are missing? If a manual check is needed, the whole process becomes useless except for some niche needs. But we are not that far off… That is a positive way to say that, from my perspective, we cannot yet avoid a manual download of the SD card. Yes, USB-C and auto-upload from cloud providers help, but it is still more painful than the cheap (or not) camera in your smartphone…
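While waiting for camera vendors to provide such a check, you can approximate it yourself. Here is a minimal Python sketch under my own assumptions (the SD card is mounted as a drive, your cloud client syncs a local folder, file names are unique; both paths are placeholders) that compares the two sides by file name and size:

```python
import os

def list_files(root):
    """Map file name -> size in bytes for every file under root.
    Assumes file names are unique across subfolders."""
    files = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            files[name] = os.path.getsize(os.path.join(dirpath, name))
    return files

# Placeholder paths: the SD card mount and the locally synced cloud folder.
sd_card = list_files("E:/DCIM")
cloud = list_files("C:/Users/me/OneDrive/Camera Uploads")

missing = [n for n in sd_card if n not in cloud]
truncated = [n for n, size in sd_card.items() if n in cloud and cloud[n] != size]

print(f"Missing from the cloud: {len(missing)} file(s) {missing[:5]}")
print(f"Different size (interrupted upload?): {len(truncated)} file(s)")
```

Only once a check like this passes would it be safe to let anything delete images from the card automatically.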

Auto-alignment of Time Lapses

When you want to create a time lapse, wind can shake your camera on its tripod. For many time lapses, it won't be a problem, either because there is no wind or because wide-angle images will not suffer from any visible shake. However, if you shoot with a non-wide-angle lens, and especially with a telephoto lens, it will certainly be a major issue.

In theory, different tools are perfectly capable of aligning time lapse members. Photoshop, since it can stack images for different purposes, has all you need. In reality, it is not designed for more than 10-20 images. So, forget Photoshop for aligning your time lapse members. Similarly, aligning them manually is nonsense, as you can easily get 500-1'000 images, if not more, for just one time lapse.

Adobe Premiere is a much better tool for such tasks, but at $20+ / month, it is certainly overkill! It does not really make sense for most of us to subscribe to the software just for aligning time lapses, right?

That said, any tool with tracking capabilities could do the job, but when it comes to practicality, it is often a different story. For instance, I have tried Hugin, a great tool for panoramas, free and open source. But as mentioned in different links, it can be tricky to align hundreds of images, especially if they have low contrast, something frequent for time lapses shot at sunset or sunrise. Remember, the software's main goal is to create panoramas, not to align time lapse members…

Basically, most of these tools have the tracking capabilities to align the members, but it is time consuming and sometimes more than challenging. For these reasons, I have developed, and plan to release in the next version of Futura Photo, a feature able to auto-align several hundreds of images without any technical knowledge, and without manual feature points to be enabled or fine-tuned by the user.
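I cannot publish the Futura Photo implementation here, but the underlying idea can be sketched with OpenCV's ECC image alignment – a generic technique, not the actual Futura Photo code, and the folder names are placeholders:

```python
import glob
import os

import cv2
import numpy as np

frames = sorted(glob.glob("timelapse/*.jpg"))   # placeholder input folder
os.makedirs("aligned", exist_ok=True)

ref = cv2.imread(frames[0])
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
cv2.imwrite("aligned/frame_0000.jpg", ref)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
for i, path in enumerate(frames[1:], start=1):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Estimate the translation that best maps this frame onto the reference.
    warp = np.eye(2, 3, dtype=np.float32)
    _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                   cv2.MOTION_TRANSLATION, criteria)
    # Apply the inverse warp so the frame lines up with the reference.
    aligned = cv2.warpAffine(img, warp, (ref.shape[1], ref.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    cv2.imwrite(f"aligned/frame_{i:04d}.jpg", aligned)
```

Shake on a tripod is mostly translation, hence MOTION_TRANSLATION; cv2.MOTION_EUCLIDEAN would also absorb a slight rotation, at a higher computing cost.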

For example, this video shows what the Futura Photo feature did with a long-range time lapse shot from Geneva, Switzerland. Sunsets are more beautiful when the wind blows, but without the tool, forget the long-range time lapse:

Another dramatic time lapse auto-aligned:

If you want to know more about the features, the software or the release date, please contact me directly on Twitter or via the website of Futura Photo.

Why Futura Photo?

November 14th, 2019 was the official launch day of Futura Photo 1.0. The company behind this software is Camera Futura Sàrl, headquartered in Geneva, Switzerland.

The software aims at streamlining the pre-processing steps needed after a photo session. Indeed, over the last years, I have had the feeling that (1) I was wasting my time culling images after a photo session and (2) I was not always very efficient at it.

Of course, professional photographers who always shoot the same kind of images have learned how to cull their thousands of images efficiently every week. But for amateurs shooting 10'000-20'000 images per year, or for people who like to try and experiment, I concluded that a tool was needed not only to speed up the culling itself and do it in a more efficient way, but also to propose a "quality gate". Many images should indeed not even pass this gate, as they simply do not fulfil basic requirements for acutance or exposure. Furthermore, duplicates have always been a struggle for photographers. Similarly, managing hundreds or even thousands of time lapse members can be time consuming. Panorama members can easily be missed when you start to build panoramas of 10-plus images.

I could add many more examples where we need software to help us, photographers, even before starting the post-processing work: aligning time lapse members when our camera has been shaken by wind, choosing either JPG or RAW when both are shot together, and much more.

I have noted that over the last years I have done a good job at keeping only the best images – 5% maximum of what I am shooting – but only because I invest time after each photo session. And these tasks are not fun. They are boring, time consuming… and, from my perspective, very important. I don't keep the useless images which would "pollute" my image library, or which would require several terabytes. Over these years, the need to automate these tasks has become more and more obvious. Last and maybe not least, this is important for our future in a sustainable world!

That is Futura Photo's goal: to be a complement to the many existing software tools that are helpful to photographers, helping them spend more time doing what they like and less time on what is needed but not exactly exciting.

The need for streamlining the pre-process of a photo session

This is not the most glamorous title one can expect, as image processing after a photo session is at best a necessity and at worst a pain. It is something photographers don't like to talk about much. They prefer to discuss how to enhance their images. Fair enough. But execution is key, and it is not because something is boring that it is not important!

So, even before the processing itself – classifying images into categories (best, to be archived, to be deleted, …) and enhancing the best with ad-hoc software (Photoshop, Lightroom, Capture One, whatever, …) – there are some steps which are all uninteresting and time consuming. In particular:

  • If you shoot RAW + JPG, you need to find out what to do with either the RAW or the JPG (see the sketch after this list),
  • If you shoot time lapses and compose them manually (with software like LR Time Lapse),
  • If you shoot panoramas that you also want to compose manually,
  • If, like me, you don't like storing dozens of similar images, you must first delete duplicates,
  • If you don't accept some images because of their exposure, grain, or other technical issues,
  • If you shoot both videos and still images (the two usually require dedicated workflows),
  • … and much more.
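For the first bullet, at least the mechanical part of the job is easy to automate. Here is a minimal sketch (the extension list and the session path are my own assumptions) that pairs RAW and JPG files by base name, so you can then decide which of the two to keep:

```python
from collections import defaultdict
from pathlib import Path

RAW_EXTS = {".nef", ".cr2", ".arw", ".rw2", ".raf"}   # adjust to your cameras

session = Path("D:/photos/2019-11-14")   # placeholder session folder
groups = defaultdict(list)
for f in session.iterdir():
    if f.is_file():
        groups[f.stem.lower()].append(f)   # DSC_0042.NEF and DSC_0042.JPG share a stem

for stem, files in sorted(groups.items()):
    exts = {f.suffix.lower() for f in files}
    if exts & RAW_EXTS and ".jpg" in exts:
        print(f"{stem}: RAW + JPG pair -> decide which one to keep")
```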

I am still surprised to see that these steps have not been automated, or only very partially, and certainly not in an integrated way: photographers with different needs should be able to improve their productivity, supported by modern technologies that help them – maybe not by choosing the best image, but at least by automating file moves and deletions, or simply by speeding up these different steps while making them more efficient. For example, I am not aware of a tool which helps detect which images are part of a panorama. Much software exists to let you assemble a panorama, but when you shoot thousands of images, it is not so trivial to pick out the panorama members from the below-average images.

Another example is time lapses. Hundreds or thousands of images, some of them part of a time lapse and others not, can be tricky to detect, and at the very least it takes time to sort out the whole set of images.
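One pragmatic heuristic, sketched below under my own assumptions (file timestamps as a stand-in for EXIF capture time, one second of tolerance, fifty frames as a minimum), is to flag any long run of images shot at a near-constant interval as a probable time lapse:

```python
from pathlib import Path

# Placeholder folder; file modification time stands in for EXIF DateTimeOriginal.
files = sorted(Path("D:/photos/session").glob("*.jpg"),
               key=lambda f: f.stat().st_mtime)

runs, run = [], files[:1]
for prev, cur in zip(files, files[1:]):
    gap = cur.stat().st_mtime - prev.stat().st_mtime
    first_gap = (run[1].stat().st_mtime - run[0].stat().st_mtime
                 if len(run) > 1 else gap)
    if abs(gap - first_gap) <= 1.0:   # same near-constant interval: same sequence
        run.append(cur)
    else:
        runs.append(run)
        run = [cur]
runs.append(run)

for r in runs:
    if len(r) >= 50:   # a long, evenly spaced run is a probable time lapse
        print(f"Probable time lapse: {r[0].name} .. {r[-1].name} ({len(r)} frames)")
```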

Last but not least, I understand the "one stop shop" approach. That's the holy grail in software, and what Lightroom (or its direct competitors) tries to achieve for most photographers. But I am not convinced, as needs can be antagonistic, and one stop shop means "compromise". This is why I would rather, maybe naively, believe in a long-term trend towards software working together, rather than one tool doing everything. My point? There is still room in 2019 for new software when it comes to image workflows.

Why it is important to only keep the best shots after each photo session

I see several reasons to keep only the best images after each photo session and to archive or delete all the other shots. I mean, we should not keep more than roughly 5% of the photos we take. And the ratio tends to decrease further the older the photo gets. Not many photographers have the discipline to take all the time needed to go through every image and remove duplicates, poorly exposed and badly focused photos. But there are several key advantages to doing so:

First, we are no longer "polluted" by average or poor images when we search our image catalogues or look back at our work, whatever the reason. Furthermore, it will of course drastically reduce the storage needed. One could argue storage is now so cheap that this is a pretty weak reason, but as I wrote already, it is good practice for our planet.

As it is painful to clean the backlog, it would at least make sense to apply the principle to any new photo session, and to apply it occasionally when we browse older archived photo sessions.

That is a classical quality gate methodology, and it also makes sense for photography. As 80% to 95% of images tend to be useless and, let's be honest, not so great, the impact will be significant. Believe me or not, it is a pleasure to browse only images you really like. But again, this is both a question of discipline and of technology, as there are so far few software tools to help you focus on the best images.

The carbon footprint for being lazy after our photo shoots: why it has to change

What this article is about

It is trendy – and, more importantly, necessary – to reduce our carbon footprint. Let's calculate how much one bad habit of photographers can pollute.

I take on average 10 to 20 thousand images per year. Many pro photographers will shoot ten times more, typically above 100 thousand images per year, if not more. At the same time, for different reasons, I work hard at keeping only the best shots. Typically, from these 10-20 thousand photos per year, I store only 1 or 2 thousand. And there is even room for improvement. I don't think this ratio is exceptional: other people report a typical ratio of 80-95% useless images, whatever the reasons. However, I must confess this eradication takes me a lot of time, and I do understand why people don't do it – it should be somewhat automated. I was wondering what the impact of keeping all these useless images is. How much greenhouse gas does it generate per year? Basically, I am wondering how much our useless images pollute when we don't eradicate them.

How many tons of carbon dioxide per thousand images stored?

Simple question, difficult answer. First and foremost, there are head and tail winds: whereas 1 gigabyte (GB) of data requires fewer and fewer tons of CO2 every year, images are becoming bigger and bigger as new sensors let you shoot with more megapixels. The same goes for videos. It looks quite challenging to anticipate future trends, but let's make the calculation as of today, in 2019. It is reasonable to believe head and tail winds will not completely change the result in the next years.

Let’s try to calculate just a rough estimate…

In this article, I don’t make any calculation for videos, just for the still images. I will consider 3 categories of photographers:

  • the casual photographer, who typically takes 5 thousand images per year,
  • the enthusiast (20 thousand images per year),
  • and the pro photographer (100 thousand images per year).

Casual photographers, in this exercise, only create JPG files from their photos, with a 24-megapixel camera. Each JPG file typically weighs 5 megabytes (MB). This means 5 MB × 5'000 = 25 GB per year.

Enthusiasts shoot RAW, with a 36-megapixel camera producing RAW files of roughly 36 MB each. They convert 10% to JPG, at 7.5 MB each. This means 36 MB × 20'000 + 7.5 MB × 2'000 = 735 GB per year.

Pros will shoot both RAW and JPG, with different cameras and sensors. Let's make a rough estimate of 15 MB per image. This means basically 1.5 TB per year.

To summarize, I will simply consider 1 TB per year per photographer. This simplifies the calculation, does not change the overall result, and is consistent with the kind of photographer we are looking at here (mostly enthusiasts or pros).

All these numbers are arguable, but they are a good starting point for a first estimate.

Now the key question is: how much carbon dioxide is emitted to store 1 TB?

Several studies have estimated that around 100 kg of carbon dioxide emissions are needed to store 1 TB of data in the cloud per year (ref. [1], [2] and [3]). Again, the calculation is quite complicated, and the range is very broad, from typically 50 kg to 2 tons. I am considering 100 kg as a conservative estimate.

This means 1 ton per year once you have accumulated 10 TB, i.e. after 10 years of photography, as storage is cumulative.
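Putting the whole estimate together in a few lines (every figure here is one of the rough assumptions above, not a measurement):

```python
KG_CO2_PER_TB_PER_YEAR = 100   # rough mid-range figure from refs [1]-[3]
TB_ADDED_PER_YEAR = 1.0        # the simplified enthusiast/pro rate from above
YEARS = 10

# Storage is cumulative: in year y you are holding y TB.
per_year = [KG_CO2_PER_TB_PER_YEAR * TB_ADDED_PER_YEAR * y
            for y in range(1, YEARS + 1)]

print(f"Year {YEARS} alone: {per_year[-1]:.0f} kg CO2")         # 1'000 kg = 1 ton
print(f"Total over {YEARS} years: {sum(per_year):.0f} kg CO2")  # 5'500 kg
```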

What does it mean in a sustainable world?

In a sustainable world, the average individual rate should be 3 tons of carbon dioxide per year (ref. [4]). We are far from that level now (US: 18-20 tons per year per person, China: 6.5 tons, …), but that's where we are heading.

It goes without saying that we can't spend almost a third of our yearly quota (on a sustainable planet) just on storing images. It should not be more than a couple of percent. Once again, this shows that a sustainable world will have dramatic consequences for our lives. It means we should eradicate all our useless images, as they represent 80-95% of this storage emission.

Conclusion

It is time to reduce the data from our images and videos. Beyond avoiding the storage of too much mostly useless information, it is necessary for living on a sustainable planet. Of course, one can object that these data "might" be useful in the future, who knows? At the same time, it is good practice to focus on what really matters and to be able to retrieve this important information later when needed. Less is sometimes better. And we always find good excuses to refuse change. But this change is needed and, in the long run, inevitable. It is time to be consistent and to eradicate, as a "pre-post-processing" step, most of the useless images, whatever useless may mean.

References

[1] – Carbon and the cloud, Stanford Magazine

[2] – Trends in Server Efficiency and Power Usage in Data Centers, SPEC 2019

[3] – The carbon footprint of a distributed cloud storage, Cubbit

[4] – Stopping Climate Change: A Practical Plan 3 Tons Carbon Dioxide Per Person Per Year, Ecocivilization

There are no rules for good photographs, but there are rules for poor photographs

A "good" image for some, but no rules can apply and some will not even like this image

As DPReview’s Nigel Danson reminds us, and to quote Ansel Adams: “There are no rules for good photographs. There are just good photographs”.

There are no rules for good photographs, fair enough, but I am convinced there are rules to define and detect the poor ones, whatever "poor" may mean for the photographer. In a digital world, we can take a lot of pictures. I shoot 10'000-20'000 photos per year (a pro can shoot over 100'000 per year). I don't use more than 1'000 of them. I like to believe it is important to delete most of them, simply to make my life easier when I start the post-processing steps and when I look back at my images, for search or other reasons.

Less is more?

Taking a lot of pictures is not always a bad habit, but at the end of the day, we all must cope with this huge and useless number of poor pictures. Therefore, it seems important to define some tangible rules that one can apply, manually or through software, to eliminate the bad ones as early as possible in the workflow. Ideally, this should be done at "run time" during the shoot itself, which is certainly possible if images are uploaded in real time to the cloud and analyzed right away.

But to be more concrete, let's say there is a need to detect and delete (non-exhaustively; a minimal detection sketch follows the list):

  • Poorly exposed images,
  • Motion blur (not on purpose) and focus blur (also not on purpose),
  • Useless duplicates (whatever that may mean).
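To show these rules can indeed be made tangible, here is a minimal sketch of such a gate, using two classic heuristics (the variance of the Laplacian for blur, histogram clipping for exposure); both thresholds are arbitrary starting points that each photographer would tune:

```python
import cv2

def quality_flags(path, blur_threshold=100.0, clip_ratio=0.05):
    """Rough quality gate: flag likely blur and badly clipped exposure."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    flags = []
    # Variance of the Laplacian: low values suggest focus or motion blur.
    if cv2.Laplacian(img, cv2.CV_64F).var() < blur_threshold:
        flags.append("blurry?")
    # Histogram ends: too many pure black or white pixels suggest bad exposure.
    hist = cv2.calcHist([img], [0], None, [256], [0, 256]).ravel() / img.size
    if hist[:8].sum() > clip_ratio or hist[-8:].sum() > clip_ratio:
        flags.append("clipped exposure?")
    return flags

print(quality_flags("DSC_0042.jpg"))   # placeholder file name
```

Duplicates can be approached in a similar spirit with file or perceptual hashes, but that is a story of its own.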

Many photographers may claim there is no way to detect poor images programmatically, due to the non-deterministic nature of art. For instance, a histogram might not be enough to detect a poorly exposed image. At the same time, it will be difficult to convince me that when a photographer fails to take the photo as intended, it is worth keeping – as soon as one believes there is a quality standard to comply with when it comes to art. It is also about being disciplined and mastering what we are doing. So, it may not be acceptable to continue working on images for which we wrongly set too high an ISO or too slow a shutter speed, or where the main subject is not in focus as we wanted.

It is simple, but not easy

As a conclusion, I tend to disagree that software cannot detect poor images. It is certainly possible to detect poor images automatically and get rid of them. It will not be the same for every photographer: everyone might have to set the acceptable quality level in terms of exposure, acutance and duplication.

It may be very difficult to delete all the poor images, but fine-tuning the parameters and the algorithms so that we get rid of most of the uninteresting ones would be more than good practice. It would save time and let the photographer focus on what really matters: the good photographs, for which there are indeed no rules.

RAW images are finally supported by Windows Explorer

(About the new Microsoft Raw Image Extension on Windows 10)

Until recently, Windows Explorer's strategy with regard to RAW files has been "not very consistent" – to say the least. For instance, it has been possible for a while to display thumbnails of Canon and Nikon RAW files (.CR2 and .NEF files) in the explorer, but most other proprietary formats could not be displayed. It was mandatory to use a 3rd-party viewer. There are quite a few, free, which work well. But using another tool when all you need is to browse thumbnails quickly or copy/paste files is clearly overkill.

In early January 2019, Microsoft released a new application for Windows 10 to fix this well-known issue. Basically, it adds native viewing support for RAW format images: Microsoft Raw Image Extension.

RAW and JPG, different formats, same experience?

It is now possible to view the thumbnails of quite a few RAW formats like any JPG image. If you are using Microsoft Photos (still not a very mature application but, let's be honest, improving year after year), you can similarly get a great full-screen view of your RAW image. Again, you can get this with a 3rd-party viewer, but it makes our life easier to have it well integrated, as for JPG files. Similarly, you have direct access to the basic EXIF metadata of the RAW files when hovering the mouse over the thumbnail of the RAW image, like for any JPG.

Which file formats are supported?

This Microsoft application is based on LibRaw, a quite well-known open source library for RAW file management. So, in theory, it should support most RAW file formats from most cameras.

I have tried with RAW files from Sony (.ARW), Nikon (.NEF), Canon (.CR2), Panasonic (.RW2) and Fuji (.RAF) and everything is fine at first glance.

How to use it?

First, you need to know that this application is not yet available in the official latest Windows 10 release. You need to join the Windows 10 Insider Preview, then download the latest build available from the Windows 10 settings menu (search for Windows 10 update). Finally, you can download and install the application itself from the Microsoft Store (Raw Image Extension).

If you don’t want to do this, you will need to wait a few weeks or months, but the application should be available in 2019 anyway. Either be patient or be bold.

For the software developers

So far, developers have had to use a specific library to read and convert RAW files (for example, ImageMagick). Thanks to the new application, it is possible to reuse the Shell thumbnails if you need, for instance, to display a gallery of RAW images, as for any JPG file. This is nice, as it will improve the performance of viewers displaying RAW files: there is no more need to convert the RAW to JPG, since Windows 10 has already done it.
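For comparison, this is roughly what the library route looks like today from Python, through the rawpy bindings to the very same LibRaw (assuming rawpy and imageio are installed; the file name is a placeholder). The new extension makes this conversion step unnecessary for simple viewing:

```python
import imageio
import rawpy

# Decode a RAW file with LibRaw (via rawpy) and save a JPG preview.
with rawpy.imread("DSC_0042.NEF") as raw:   # placeholder file name
    rgb = raw.postprocess()                 # demosaiced 8-bit RGB array
imageio.imwrite("DSC_0042_preview.jpg", rgb)
```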

Limitations

Be aware that the limitations below might change, as I am not using the final version of Windows 10 that will be released with the Raw Image Extension.


(Tests done on 25/03/2019 on W10 build 18362.1)

  • "Extra Large Icons" in the explorer are far from extra-large on any 4K screen. They are like they used to be… which means they are ridiculously small for 2019 monitors.
“Extra Large icons” on a 4K screen
  • LibRaw does not yet support Canon's latest RAW format, .CR3, so neither does the application. It probably will in the future, but it is not even being developed at the moment, as far as I can check.
  • One more thing: for developers, there is no change in the extra-large thumbnail size from the shell file. They are still no bigger than 1024 pixels wide – not exactly extra-large by 2019 standards.

Conclusion

From the tests done on the latest Windows 10 build, Microsoft is still not at the level one can expect when it comes to photographers' main features, but it is improving – maybe not quickly, but at least steadily – with the 2018 improvements to the Microsoft Photos application and this new Raw Image Extension application to be released in 2019.

A quick overview of the challenges when we want to store our images or videos

The problem has been discussed plenty of times, but I am trying to keep it simple. Basically, it is about:

  • How to organize your files
  • How to store them
  • How to be sure your back-up strategy is resilient to different risks

How to organize your files

There are quite a few articles or blogs about the topic (this one is recommended; translate it into English if you need to), but I like to put it down to something quite simple:

  • We need to have folders, even if we use tags and metadata. A folder should correspond to a photoshoot, and it is important to be able to come back to what really happened, the way it happened. Tags will make you lose sight of this context; they are good for search and retrieval. Any picture is always part of a photoshoot. This should not be forgotten.
  • There are different ways to organize folders, but a good principle is one folder per year, and within each year, one per event or main category – or per month if you really shoot a lot. Anyway, you get the point (a small sketch of this layout follows the list).
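As an illustration of the year/event principle, here is a minimal sketch under my own assumptions (recent Pillow for EXIF, file date as a fallback, placeholder paths and naming) that files images into year/date folders; the event name still has to be typed by hand:

```python
import shutil
from datetime import datetime
from pathlib import Path

from PIL import Image

def capture_date(path):
    """EXIF DateTimeOriginal if readable, otherwise file modification time."""
    try:
        exif = Image.open(path).getexif().get_ifd(0x8769)  # Exif sub-IFD
        return datetime.strptime(exif[36867], "%Y:%m:%d %H:%M:%S")
    except Exception:
        return datetime.fromtimestamp(path.stat().st_mtime)

inbox = Path("D:/photos/inbox")      # placeholder paths
library = Path("D:/photos/library")
for f in inbox.glob("*.jpg"):
    d = capture_date(f)
    dest = library / str(d.year) / d.strftime("%Y-%m-%d_event")  # rename "event" by hand
    dest.mkdir(parents=True, exist_ok=True)
    shutil.move(str(f), str(dest / f.name))
```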

Smartphone complexity

The device has become, for many, the only camera they use, and for any photographer it can be useful as well. The way the operating system stores the data is mostly hidden, to let the user browse the images differently from the way they are stored. This sounds like a fair principle, focused on user experience – it tries to hide some technical complexity – but it brings that complexity back in a somewhat unpleasant way, as you need to understand where and how the actual files are stored and manage them accordingly, like any other digital asset. It is what it is: spend time and learn how to manage the folders and the images on your smartphone(s) like on any other device (for Android, some information is available here).

How to store them

Again, this is a topic discussed many times (this article looks like a good introduction). I would nevertheless consider the different options:

  • Hard drive of course
  • Back-up Hard drive (external or internal, or both)
  • Cloud back-up
  • A second computer as a back-up for cloud data

About cloud providers: make sure they store the images at their original quality. It is a back-up solution, so you need the originals stored. Read also how they use the data and where (in which country) the data are stored. Have a look, typically annually, at the company behind the service. This is a quick check to be sure you are working with the right organization for your needs, one which can offer some long-term stability. You don't want to change provider every couple of years.

Main risks

The most basic one, still ignored by many, is hardware failure, typically of the hard drive. The point is not whether it has become very rare with SSDs or not; it is by nature something which may happen anytime. It is a risk to be considered.

Another risk, quite a painful one, is certainly having your computer stolen, and the same for your back-up drive. It is really unpleasant when both are stolen at the same time because they were in the same location. If you have a look at statistics, you will discover that this event is far from unlikely, considering you need to evaluate it over a lifetime.

Fire, flooding or other natural events look very unlikely to many but, again, over your whole life the probability of facing such an event is certainly not zero, even if the odds remain in your favour. So there is no reason not to be protected from this risk as well.

The last main risk is having the password of your cloud provider stolen – especially if you don't have two-factor authentication – and used to delete your whole data set. Unlikely, but not impossible.

Risk management

Below is a basic summary:

Risk vs. storage solution (“N” means you are not protected against this risk):

| Storage solution             | Hard drive failure | Natural disaster in your home | Hardware stolen | Password stolen | Major natural disaster |
|------------------------------|--------------------|-------------------------------|-----------------|-----------------|------------------------|
| Hard drive                   | N                  | N                             | N               | Y/N             | N                      |
| Back-up hard drive, external | Y                  | Y                             | Y               | Y               | N                      |
| Back-up cloud provider       | Y                  | Y                             | Y/N             | N               | Y                      |

Conclusion

In a world of digital data, I would not underestimate the risks, and at the same time it is important to keep things simple. So, I have my own strategy to be protected – as far as I can estimate the risks – against any threat:

  • Any digital asset is saved on the hard drive of my desktop, with an auto-sync back-up to a "mainstream" cloud provider (Microsoft, Amazon, Google or Apple).
  • I save the data on a yearly basis to an external drive that I store in a different place (a family member keeps it, and as we meet every Christmas, it is easy to remember to bring the updated data) – see the sketch after this list.
  • I have another back-up on my laptop, auto-synced thanks to the cloud provider.
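For reference, the yearly external-drive step can be reduced to a few repeatable lines (a simple sketch with placeholder drive letters, assuming Python 3.8+; rsync or robocopy would do the same job):

```python
import shutil
from pathlib import Path

src = Path("C:/Users/me/Pictures")        # placeholder source
dst = Path("F:/yearly-backup/Pictures")   # the external drive updated each Christmas

# Copy the whole tree, overwriting what is already there (Python 3.8+).
shutil.copytree(src, dst, dirs_exist_ok=True)

# Quick sanity check: the same number of files should exist on both sides.
n_src = sum(1 for p in src.rglob("*") if p.is_file())
n_dst = sum(1 for p in dst.rglob("*") if p.is_file())
print(f"{n_src} files in source, {n_dst} in the back-up")
```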

This means I have at least 4 data sets stored in at least 3 different locations, whereas all I have to do manually is one yearly back-up. Easy to manage. And I feel safe.