If you have tried to shoot a time lapse with a telephoto lens, you certainly know that it can be quite challenging. The biggest challenges are:
Finding compatible weather,
Making sure your sensor is perfectly clean,
Managing wind shake on the tripod.
Nothing original? Wait: these topics are far from trivial or “business as usual”.
Weather forecast for Long Range Time Lapse
You will need much more than the usual classic forecasts; the requirements are quite close to astrophotography’s. So, in theory, this site should do the job.
Really? Well, not perfectly, even if it is useful, because one of the most important criteria, contrast, is not measured or forecast locally (or tell me where, please!). Contrast depends heavily on local specificities, as it is lowered by particle emissions, for instance. Experience in my region shows that, as expected, a clear day after a rainy and windy day looks much better. But there are plenty of exceptions and counter-examples. I finally came to the point where I need to check contrast the day before my shooting day, or the morning of it; nothing else works. I use previous images to evaluate the contrast.
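Since no forecast reports contrast, one practical option is to measure it on previous frames programmatically. Below is a minimal sketch with NumPy; RMS contrast is just one possible metric, and the synthetic “clear” and “hazy” frames stand in for real grayscale images:

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast: standard deviation of the normalized pixel intensities."""
    g = np.asarray(gray, dtype=np.float64) / 255.0
    return float(g.std())

# Illustrative stand-ins for real frames: haze flattens the histogram,
# so the intensity spread (and hence RMS contrast) is lower.
rng = np.random.default_rng(0)
clear = np.clip(rng.normal(128, 60, (100, 100)), 0, 255)
hazy = np.clip(rng.normal(128, 15, (100, 100)), 0, 255)
print(rms_contrast(clear) > rms_contrast(hazy))  # → True: the clear frame scores higher
```

Comparing the score of a fresh frame against frames from days you judged good or bad gives a rough, repeatable go/no-go signal.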
A clean sensor
This is not like any other kind of photo session. Most telephoto lenses do not open as wide as other primes do, so any dust is much more visible. Furthermore, haze is inevitable even if the chosen day limits it (see the previous paragraph). Therefore, dehazing tools during post-processing will be helpful. But dehazing tools enhance dust as well…
In conclusion, a rigorous approach to sensor cleaning is a must, either with a pre-check just before starting the time lapse or with a systematic cleaning of the sensor.
Managing wind shake on the tripod
Not an easy one… With a telephoto lens, even if the wind is negligible, aligning the images will be almost mandatory. You will need dedicated software for this purpose.
On top of the above challenges, it is important not to forget anything:
Scenario: what you want to achieve and how (intervals, number of shots, shooting RAW or not, which focal length, …),
Weather forecast (don’t miss the day; remember in particular contrast and turbulence, and clouds as well, as clouds are essential in a time lapse),
Programming the camera for the time lapse (much easier at home than in the field, where it could be dark and cold),
Camera and lens at the right temperature: the gear will need at least 20 minutes outside to reach the right temperature. Failing to do so will shift the focus, for instance, and focus must usually be set manually for these photo sessions.
Set most/all parameters to Manual (focus, exposure, ISO, …) and, if not, to what is needed. Time lapses are frequently shot fully manually, and this is even more true for long-range ones.
Please let me know if I have missed something, happy long-range Time Lapse!
P.S.: cover image – Mont Blanc from Geneva during the lockdown, over 80 km away, with exceptionally low pollution. Nikon Z50 – focal length: 1’000 mm. The bright spot at the bottom center of the image is the “Refuge du Goûter”, from which most mountaineers start their climb to the summit.
When you want to create a time lapse, wind can shake your camera on its tripod. For many time lapses, it won’t be a problem, either because there is no wind or because wide-angle images will not suffer from any visible shake. However, if you shoot at longer focal lengths, and especially with a telephoto lens, it will certainly be a major issue.
In theory, several tools are quite capable of aligning time-lapse frames. Photoshop, as it can stack images for different purposes, has all you need. In reality, it is not designed for more than 10-20 images, so forget Photoshop for aligning your time-lapse frames. Similarly, aligning them manually is nonsense, as you can easily get 500-1’000 images, if not more, for a single time lapse.
Adobe Premiere is a much better tool for such tasks but, at $20+/month, it is certainly overkill! It does not really make sense for most of us to subscribe to the software just for aligning time lapses, right?
That said, any tool with tracking capabilities could do the job but, in practice, it is often a different story. For instance, I have tried Hugin, a great tool for panoramas, free and open source. But, as mentioned in different links, it can be tricky to align hundreds of images, especially if they have low contrast, which is frequent for time lapses shot at sunset or sunrise. Remember, the software’s main goal is to create panoramas, not to align time-lapse frames…
Basically, most of these tools have the tracking capabilities to align the frames, but it is time consuming and sometimes more than challenging. For these reasons, I have developed, and plan to release in the next version of Futura Photo, a feature able to auto-align several hundred images without any technical knowledge, and without manual feature points enabled or fine-tuned by the user.
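For readers who want to experiment before that release, a translation-only alignment can be sketched in plain NumPy using phase correlation. This is a generic textbook technique, not Futura Photo’s actual implementation, and real frames would additionally need windowing and subpixel refinement:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation between two grayscale frames
    using phase correlation (normalized FFT cross-correlation)."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross = F_ref * np.conj(F_img)
    cross /= np.abs(cross) + 1e-12          # keep only the phase difference
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the frame wrap around to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic demo: displace a frame by (3, -5) pixels and recover the correction.
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
img = np.roll(ref, (3, -5), axis=(0, 1))
print(estimate_shift(ref, img))  # → (-3, 5): np.roll(img, (-3, 5)) re-aligns it onto ref
```

Running this per frame against a reference frame, then shifting each frame by the estimated correction, is the core of a basic time-lapse stabilizer.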
For example, this video shows what it did with a long-range time lapse shot from Geneva, Switzerland. Sunsets are more beautiful when the wind blows but, without the tool, forget the long-range time lapse:
Another dramatic time lapse auto-aligned:
If you want to know more about the features, the software or the release date, please contact me directly via Twitter or on the Futura Photo website.
It is trendy – and, more importantly, necessary – to reduce our carbon footprint. Let’s calculate how much a bad habit of photographers can pollute.
I take on average 10 to 20 thousand images per year. Many pro photographers shoot ten times more, typically above 100 thousand images per year, if not more. At the same time, for different reasons, I work hard at keeping only the best shots. Typically, from these 10-20 thousand photos per year, I store only 1 or 2 thousand, and there is even room for improvement. I don’t think this ratio is exceptional: other people report a typical ratio of 80-95% of useless images, whatever the reasons. However, I must confess this eradication takes me a lot of time, and I do understand why people don’t do it – it should be somewhat automated. I was wondering what the impact of keeping all these useless images is. How many greenhouse gases does it generate per year? Basically, I am wondering how much our useless images pollute when we don’t eradicate them.
How many tons of carbon dioxide per thousand of images stored?
Simple question, difficult answer. First and foremost, there are head and tail winds: whereas 1 gigabyte (GB) of data requires less and less CO2 every year, images are becoming bigger and bigger as new sensors let you shoot with more megapixels. Same situation for videos. It looks quite challenging to anticipate future trends, but let’s make the calculation as of today, in 2019. It is reasonable to believe head and tail winds will not completely change the result in the coming years.
Let’s try to calculate just a rough estimate…
In this article, I don’t make any calculation for videos, just for still images. I will consider 3 categories of photographers:
casual photographers, who typically take 5 thousand images per year,
enthusiasts (20 thousand images per year),
and pro photographers (100 thousand images per year).
Casual photographers, in this exercise, only create JPG files from their photos, with a 24-megapixel camera. So each JPG file typically weighs 5 megabytes (MB). This means 5 MB × 5’000 = 25 GB per year.
Enthusiasts shoot RAW, with a 36-megapixel camera, and convert 10% to JPG at 7.5 MB each. This means 36 MB × 20’000 + 7.5 MB × 2’000 = 735 GB per year.
Pros will shoot both RAW and JPG, with different cameras and sensors. Let’s make a rough estimate of 15 MB per image. This means basically 1.5 TB per year.
To summarize, I will just consider 1 TB per year per photographer. This simplifies the calculation, will not change the overall result, and is consistent with the kind of photographer we are looking at for this effect (mostly enthusiasts or pros).
All these numbers are arguable, but that’s a good starting point for a first estimation.
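The per-category estimates above boil down to one line of arithmetic. A small helper makes them easy to reproduce; the per-file sizes are the article’s own illustrative figures, not measurements:

```python
def annual_storage_gb(images_per_year, raw_mb=0.0, jpg_mb=0.0, jpg_ratio=1.0):
    """Rough yearly storage in GB; file sizes are illustrative assumptions."""
    total_mb = images_per_year * (raw_mb + jpg_ratio * jpg_mb)
    return total_mb / 1000.0  # 1 GB ≈ 1000 MB for a back-of-envelope estimate

print(annual_storage_gb(5_000, jpg_mb=5))                               # casual: 25.0 GB
print(annual_storage_gb(20_000, raw_mb=36, jpg_mb=7.5, jpg_ratio=0.1))  # enthusiast: 735.0 GB
print(annual_storage_gb(100_000, raw_mb=15))                            # pro: 1500.0 GB = 1.5 TB
```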
Now the key question is: how much carbon dioxide is emitted for 1 TB?
Several studies have shown that around 100 kg of carbon dioxide emissions are needed to store 1 TB of data on the cloud (ref. , and ). Again, the calculation is quite complicated, and the range is very broad, typically from 50 kg to 2 tons. I am considering 100 kg as a conservative estimate. This means 1 ton per year for 10 TB, after 10 years of photography, as storage is cumulative.
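The arithmetic behind that statement is simple enough to sketch; 100 kg/TB/year is the conservative figure quoted above, not a precise measurement:

```python
KG_CO2_PER_TB_YEAR = 100  # conservative estimate; studies range from ~50 kg to 2 t

def yearly_emission_kg(tb_stored):
    """CO2 for one year of keeping `tb_stored` terabytes on the cloud."""
    return tb_stored * KG_CO2_PER_TB_YEAR

# Storage is cumulative: adding 1 TB per year, year 10 holds 10 TB,
# so that single year already emits a full ton.
print(yearly_emission_kg(10))  # → 1000 kg, i.e. 1 t of CO2 for that year alone
```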
What does it mean in a sustainable world?
In a sustainable world, the average individual rate should be 3 tons of carbon dioxide per year (ref. ). We are far from that level now (US: 18-20 tons per year per person; China: 6.5 tons; …) but that’s where we need to go.
It goes without saying that we can’t use almost one third of our yearly quota (on a sustainable planet) just for storing images. It should not be more than a couple of percent. Once again, this shows that a sustainable world will have dramatic consequences for our lives. It means we should eradicate all our useless images, as they represent 80-95% of this storage emission.
It is time to reduce our data from images and videos. We store too much, mostly useless, information, and reducing it is necessary for living on a sustainable planet. Of course, one can object that these data “might” be useful in the future, who knows? At the same time, it is good practice to focus on what really matters and to be able to retrieve the important information later when needed. Less is sometimes better. We always find good excuses to refuse change, but this change is needed and, in the long run, inevitable. It is time to be consistent and to eradicate, as a “pre-post-processing” step, most of the useless images, whatever useless may mean.
There are no rules for good photographs, fair enough, but I am convinced there are rules to define and detect the poor ones, whatever poor may mean for the photographer. In a digital world, we can take a lot of pictures. I shoot 10’000-20’000 photos per year (a pro can shoot over 100’000 per year). I don’t use more than 1’000 of them. I like to believe it is important to delete most of them, just to make my life simpler when I start the post-processing steps and when I look back at my images, whether for search or for other reasons.
Less is more?
Taking a lot of pictures is not always a bad habit but, at the end of the day, we all must cope with this huge number of useless, poor pictures. Therefore, it seems important to define some tangible rules that one can apply, manually or through software, to eliminate the wrong ones as early as possible in the workflow. Ideally, this should be done at “run time”, during the shoot itself, which is certainly possible if images are uploaded in real time to the cloud and analyzed right away.
But to be more concrete, let’s say there is a need to detect and delete (non-exhaustively):
Poorly exposed images,
Motion blur (not on purpose) and focus blur (not on purpose either),
Useless duplicates (whatever that may mean).
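As a sketch of how such rules can be made tangible, here is a minimal NumPy example flagging histogram-extreme exposure and low sharpness via a Laplacian response. The thresholds are arbitrary placeholders that each photographer would tune to their own quality standard:

```python
import numpy as np

def poorly_exposed(gray, clip_fraction=0.2):
    """Flag frames where too many pixels sit at the histogram extremes."""
    extreme = np.mean((gray <= 5) | (gray >= 250))
    return extreme > clip_fraction

def sharpness(gray):
    """Variance of a 4-neighbour Laplacian response: low values suggest blur."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

# Illustrative frames: random noise is full of high-frequency detail,
# a constant frame has none at all.
rng = np.random.default_rng(0)
sharp = (rng.random((64, 64)) * 255).astype(np.uint8)
blurry = np.full((64, 64), 128, dtype=np.uint8)
print(sharpness(sharp) > sharpness(blurry))  # → True
```

Duplicate detection would need a separate similarity measure (hashes, timestamps, correlation), but the same “score then threshold” pattern applies.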
Many photographers may claim there is no way to detect poor images programmatically due to the non-deterministic nature of art. For instance, a histogram might not be enough to detect a poorly exposed image. At the same time, it will be difficult to convince me that when a photographer fails to take a photo as intended, it is worth keeping, as long as one believes there is a quality standard to comply with when it comes to art. It is also about being disciplined and mastering what we are doing. So it may not be acceptable to continue working on images for which we wrongly set too high an ISO or too slow a shutter speed, or where the main subject is not in focus as we wanted.
It is simple, but not easy
As a conclusion, I tend to disagree with the claimed impossibility of detecting poor images with software. It is certainly possible to detect poor images automatically and get rid of them. The settings will not be the same for every photographer: everyone may have to define the acceptable quality level in terms of exposure, acutance and duplication.
It may be very difficult to delete all the poor images, but fine-tuning the parameters and the algorithms so that we get rid of most of the uninteresting ones would be more than good practice. It would save time and let the photographer focus on what really matters: the good photographs, for which there are indeed no rules.
Shooting at high ISO is a highly commented topic. Some believe its importance is overstated and, indeed, it is far from being so important in photography. At the same time, we should know the limits: how dark can it be? How far can we push ISO when shutter speed is critical? Not from a purely technical perspective, but to stay consistent with our overall artistic approach. Some photographers may shoot only in very low light, but that’s unusual. Most of us shoot at high ISO some of the time and at lower ISO values the rest of the time. So high-ISO noise is just a constraint we need to deal with.
The problem is to know, for each camera we own, the ISO limit above which we should not shoot. Indeed, too much noise or too much underexposure leads to unacceptable image quality. The usual approach is purely empirical: when you believe the noise level has become unacceptable, you just don’t shoot at that value or above.
The problem with this approach is twofold: it can be biased, as there is no tangible comparison until you use a scientific measurement of noise, and it does not consider the fact that you may shoot with different kinds of cameras (from smartphones to drones, DSLRs, full-frame or small sensors). However, regardless of the sensor, a photographer should keep a consistent signal-to-noise ratio between cameras (taking signal-to-noise as a proxy for noise level). Nobody cares which camera you used for a photoshoot, but all images should be delivered at a similar, if not equal, quality level irrespective of the sensor.
I have started to measure the SNR (signal-to-noise ratio, a proxy for noise level) for a given camera at different ISO levels. The process is the following:
Take photos of the same object or landscape at different ISOs with the same histogram (no over- or underexposure between images) but with different cameras. Images must be as similar as possible.
Define a limit above which you believe noise is too high for your best sensor (Nikon D750 in my case).
Define the ISO limit for each camera for the same value of SNR, to ensure consistent quality.
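One hedged way to put a number on the SNR is to estimate it from a burst of identical shots: the per-pixel mean is the signal, the per-pixel standard deviation the noise. This is a simplification for illustration, not the exact measurement protocol used for the graph below:

```python
import numpy as np

def snr_db(frames):
    """SNR from repeated shots of the same scene: mean signal over mean
    per-pixel noise, reported in decibels."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    signal = stack.mean(axis=0)
    noise = stack.std(axis=0)
    ratio = signal.mean() / (noise.mean() + 1e-12)
    return 20 * np.log10(ratio)

# Synthetic example: "high ISO" frames carry more noise, so the SNR drops.
rng = np.random.default_rng(0)
scene = rng.uniform(50, 200, (64, 64))
low_iso = [scene + rng.normal(0, 2, scene.shape) for _ in range(8)]
high_iso = [scene + rng.normal(0, 20, scene.shape) for _ in range(8)]
print(snr_db(low_iso) > snr_db(high_iso))  # → True
```

Repeating this at each ISO value, for each camera, produces the per-camera SNR curves the method relies on.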
Results on the graph below:
Based on this method, I concluded I can shoot up to 12’800 ISO maximum with my D750 and, if possible, not above 6’400. It is not that the image is unacceptable above that; it is just to be sure that shooting at high ISO has no significant impact on image quality according to my own standard. The SNR is indeed stable up to 6’400 ISO.
But with the Nikon D7000, an older APS-C camera, it is no more than 1’600 ISO. With my APS-C mirrorless Fuji X100s, it is 3’200 ISO (due to a more recent sensor). And with my compact Panasonic LX100, no more than… 400 ISO.
This came as a surprise. I used to shoot way above 400 ISO with my compact but indeed, at a closer look, it is not without consequences for image quality.
It also shows how some sensors are just much better because their SNR stays stable (D750 or LX100) before dropping at very high ISO, while others decrease steadily (like the X100s or D7000). With the former, you just shoot at whatever ISO you want below a given limit, whereas with the latter, you try to keep ISO as low as possible every time.
Please contact me if you want to know more about this approach and how to shoot at high ISO without losing image quality.
Clipping in photography is well known and, whereas it is sometimes done on purpose, it mostly comes as an undesirable effect, caused by poor exposure (worst case) or at least by reaching the limits of the sensor’s range (best case). By clipping, I mean both blowing highlights and clipping blacks. The topic has been debated countless times in different forums and blogs.
As a summary, some people believe it does not really matter as long as the photo is great, while others advocate why and how to avoid it. Others rightfully point out that it is sometimes better not to fix it, whereas still others explain in detail how to do it the right way.
This is a classic case of differing opinions in photography, between those who do not want to consider anything but the purely artistic result and the scientists obsessed with being consistent with physical principles. As usual, both are right and wrong at the same time. Indeed, what matters in photography is the result, the emotions a photograph can carry, and whether you like it. Period. Clipping, no clipping, who cares? At the same time, it is true that blowing your sensor until it can no longer deliver any information except “I am blown” (burned white) or “I am blind” (clipped black) is not really what someone can call good practice, to say the least.
In this post, I am trying to find a way to make all these opinions somewhat aligned, in a very Swiss-like, consensual way.
How to detect it and how to fix it
There is also plenty of information on the topic. I would recommend reading:
Most people know the “physical” clipping well: when the sensor is blown. Technically speaking, it means the pixels of a given channel (R, G, B), or their luminance (based on R, G and B weighted according to the characteristics of the human eye), are at the maximum value (typically 255 for an 8-bit JPG) or at the minimum (0).
But it is also important to remember that what matters is the “visual” clipping: pixels that are almost blown or clipped also matter because (at least for JPG images) there can be no way to really fix them properly and recover information from the clipped regions of an image.
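The distinction can be made concrete in a few lines of NumPy: count the pixels sitting exactly at the extremes (“physical” clipping) and those within a small margin of them (“visual” clipping). The 5-level margin is an arbitrary assumption; pick whatever matches your own tolerance:

```python
import numpy as np

def clipping_report(gray, margin=5):
    """Fraction of physically clipped pixels (exactly 0 or 255) and of
    'visually' clipped ones (within `margin` of the extremes), per 8-bit channel."""
    g = np.asarray(gray)
    physical = np.mean((g == 0) | (g == 255))
    visual = np.mean((g <= margin) | (g >= 255 - margin))
    return physical, visual

# Illustrative 'almost blown' sky: barely any physical clipping,
# yet visually the whole area is gone.
sky = np.full((100, 100), 252, dtype=np.uint8)
sky[:2, :] = 255
phys, vis = clipping_report(sky)
print(phys, vis)  # physical ≈ 0.02, visual = 1.0
```

A large gap between the two numbers is exactly the situation discussed next: the histogram says the image is fine, the eye says it is blown.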
Let’s have a look at the clipped pixels, highlighted in blue for the blacks and in red for the highlights, in the image below. First, one could argue that using JPG in such conditions is not the best idea; RAW would have been by far a better choice. But without starting yet another endless RAW vs. JPG debate: the image has been poorly exposed, as there are no clipped black pixels (they would be coloured blue in the image below), whereas there are quite a few blown ones (in red below). So, basically, the image should have been significantly less exposed.
But whereas the number of actually blown pixels (in red) is not so significant, the number of visually clipped pixels is at an unacceptable level. It makes the image ugly, whereas it was an interesting one. These pixels are only almost blown from a physical perspective but, to our eye, they are just blown… However much you try to reduce highlights or exposure, there is basically no information recorded in the sunny mountain part of the photo. The image will stay poor.
So what matters is not the truly clipped pixels but those which look clipped. Using Lightroom or another software tool is not enough even if, again, you can’t do much to fix it when you shoot JPG. That’s a good transition to the next point.
Clipping is not the same animal when you shoot RAW or JPG
I believe it makes sense to differentiate JPG from RAW images when it comes to clipping. For RAW images, with modern sensors, clipping is rare: either you really do it on purpose, or you have no idea how to use your camera’s exposure systems! The example below shows how tolerant sensors now are to clipping:
I know it is not so simple, and you can clip some parts of a photo despite your goodwill and expertise while shooting RAW. My point, however, is that it is rarely a problem, and it is easy to identify and to anticipate, as it will only concern extremely high-contrast images.
When it comes to JPG, this is a totally different story. It can be easy to clip parts of an image, and it can be difficult to fix, as we have seen above. What matters is, first, to know quite well how to detect that the image will have some clipping. Second, you need to know whether or not it is a problem for your image. There is no universal answer to this (from my perspective, though, it will very often be a problem). One approach would of course be to shoot RAW any time there is a risk of clipping, just to have more latitude in post-processing, but it is not always possible or desirable. At least you know what to do. So it looks important to understand the causes and consequences of clipping, and how RAW can fix it while bringing the usual inconveniences of shooting RAW (processing time, file size, buffer limits, …). If you don’t shoot RAW, you normally have reasons for this choice. This is a good transition to the next point: this is where good and bad clipping matters as well in your workflow.
The Good clipping and the bad clipping
The bad clipping is the one you should not get. Just expose your image better: underexpose it when you have bright parts, or overexpose it when you have potentially too many black-clipped pixels.
The good clipping is just inevitable. Below is an example:
When we analyse the image below, we can see we have both blacks clipped and highlights burnt: in red the burned pixels, in pink the “visually” clipped ones; in dark green the black-clipped pixels and in light green the “visually” clipped ones.
Having both significant red and green zones just says you are going beyond the capabilities of your sensor. Either buy a better one with a higher dynamic range… or use an artificial way (flash, umbrella, filters, …) to decrease the contrast, which of course is not always possible or desirable depending on the kind of pictures you are shooting.
Conversely, if you aim at high-key (or low-key) images, the result will be clipped, fair enough, but the pre-processed images – the RAW images or the JPGs out of the camera, before you start to work on them – should not be clipped. To illustrate this, a cute gallery of high-key images that I like:
I like these images, but I would not bet they were clipped out of the camera.
Good principle: clipping is bad
Long story short, it will be difficult to convince me that clipping is not bad. Indeed, if you are looking to shoot high- or low-key images, or if you want to stylize your images, that’s more of a post-processing thing. If you know what you are doing, you can argue “I clip on purpose” but, most of the time, clipping is just bad. Your sensor no longer provides information, only a very black-and-white rendering of reality. What you do in post-processing is a different discussion; when you shoot and you anticipate clipping, unless you know exactly why, you should do whatever it takes to limit it (by under/overexposing or bracketing) or avoid it (the same actions, plus RAW and post-processing).
Conclusion and summary
Let’s start with another example. From my perspective, the image below is poorly exposed, over-clipped into an ugly grey-white sky:
The light was terrible, due to haze caused by hot air. This image looks ugly to me, whereas Mopti is such a dramatic city. I tried to post-process it, but there was no way to fix it (I was travelling, short on time, and I did not see a way to avoid the clipping). The light is bad; it is what it is.
My point: clipping is (very often) bad, even if you can’t avoid it. There may be some counter-examples (try to shoot an image of a polar bear in the Arctic without clipping the snow…) but they demand having at least understood how to produce a pleasant image, and taking counter-measures to reduce the visual impact (shooting RAW, shooting only when there are some shadows to produce darker zones, …).
When you always shoot the same kind of picture, you know what you are doing. You don’t really need the following conclusion, as you have no problem delivering images you are familiar with. But it is also good in life to try new things, and when you shoot new subjects, in a new way, in new places, you will have many reasons to fail to deliver great images. So it is good to remember some basic principles. Beyond all the discussions and remarks, I like to keep something easy in mind:
Shoot whatever you like, but clipping is bad.
The more you know how to detect it, avoid it or at least manage it, the better. It is not a fight between art and science; it is about discipline.
I have written several times that technical innovation can either be a way to foster your creativity or, most of the time, a useless distraction. I am not saying I am opposed to innovation, quite the opposite of course, but I like to believe one should always remember the basics:
1. Subjects’ choice
Whatever the technology and the gear, and even if you know how to post-process images well, you need to be creative and have artistic skills if you want to create “great” images. It is not the whole story, in my humble opinion, but I like to believe it starts here: learn to be creative, be yourself and express yourself.
2. Shooting skills
Some photographers have “the eye”; most have not, and you can hardly learn it. Some know how to compose and when to shoot.
3. Post process and technology
Yes, never be overwhelmed by them: they are nothing but tools for the artist. But it seems more important than ever to know everything about photography’s technology and how to post-process images.
At the end of the day, photographers who excel at the three pillars of photography are usually admired or, at least, can produce amazing pictures. Know where you stand, and in which areas you need to improve!
The digital revolution may have begun around 1999 or 2000 with the first real DSLRs from Nikon and Canon. Almost 15 years later, evolution continues: every quarter, great cameras, software or new web services are released. But I believe more and more that this is the end of the digital revolution. And that’s good news for photography, because we may be able to focus again on what really matters: the picture, not the technology.
Cheap point-and-shoot cameras and smartphones are making everyone a photographer. Modern sensors and skilled engineers allow everyone to take very decent shots, even with no knowledge of photography. Digital filters and photo sharing make the pictures look even better, and available right away to whoever matters to you. Anonymous photographers can become very famous thanks to Instagram, much more so than many legendary photographers. So what? That’s fine; these are just the consequences of the modern digital revolution. It is time to learn to live with it.
We have learned HDR, digital filters, advanced post-processing and much more over the last years. We can now buy a small camera with a 40x zoom for a fraction of the price of the whole set of lenses we used to need ten years ago. Or a mirrorless camera, or a tiny compact taking better pictures than DSLRs did a few years ago. We can store and share online so easily nowadays. Much more will come, of course, and we will have to adapt. But I wonder whether most of the breakthroughs might not be behind us. And that’s also good news. Revolutions are exciting, but they distract us when they don’t exhaust us. A necessary evil, but still an evil.
Above is an example of how my pictures have evolved over 20 years of mountaineering! Is it better or worse? It does not matter; things have changed, and dramatically, to say the least.
No revolution lasts forever
Mirrorless cameras did not change anything about this revolution, even if they are great cameras and improved the revenues of major vendors. I like to say they rang the bell: this is the end; we are entering a new era. Despite being a major innovation, they do not change the game that much. And I doubt Lytro will bring anything significant either, by the way.
Same for Google+’s and Facebook’s recent photo-sharing improvements. Photo sharing is becoming a commodity nowadays. It may be good for everyone, but it won’t change the game.
The bottom line
We are getting bored with the revolution. We can now focus again on what really matters: taking pictures. We don’t have to spend weeks testing the new stuff; we have to spend weeks focusing on creativity, photography and what we want to show to others. It’s no longer about software and hardware; it’s about life and creativity.
Many photographers have never stopped working this way, fair enough, but I like to believe they were really lost in the turmoil of this revolution. The dust is settling down, so I want to see in 2013 the new rise of great photographers: not those showing HDR on Flickr, their meals on Instagram or selfies on Facebook, but those who have something to say.
Every revolution evaporates and leaves behind only the slime of a new bureaucracy – Franz Kafka
Whereas photo sharing has become very popular over the last few years, it is well known that finding “good photographers” – meaning people who take pictures YOU like – has become more and more challenging. Arthur Chang has already written an excellent post about this. Curation is indeed a real challenge, as the flow of new pictures gets bigger every month.
Some may argue they can always find good pictures easily. I can’t and, more importantly, I struggle to find photographers I really like. That’s surprising, because I am a very versatile photographer and I can like a lot of different “species” of photographers. But the reason is trivial: there are now too many pictures!
Too much information
I don’t know Arthur, but he seems to have tons of good ideas about curating your friends’ photos. And obviously, there is some real room for improvement. I have had several similar discussions with Pictarine‘s founder. Their service is a great curating tool, but it still does not help with information overload, at least so far. It’s rather the other way around: you are not going to miss any picture! And again, I don’t know whether my existing contacts are really so close to connecting me with the photographers I like. In theory, yes, and the only good tool I know, flexplore, does a really decent job. Most of the pictures are beautiful. It helps a lot to discover great pictures, but not to connect with photographers I like, because a good picture does not mean you will like the photographer’s whole body of work! At the end of the day, photo sharing is about people, not photography.
Most photo-sharing services are egocentric, which can be fine, fair enough. But it is not enough. When you like photography per se, and not only because you need people faving your photos just because you faved theirs, you would love to browse for more. You want to be surprised, you need emotions, you want to discover. And, as Thomas Hawk said, most photographers’ work will stay unknown to those who would like to see it. Moreover, it is now so easy to engage and communicate that you would love to contact them, but you can’t, because they are lost in the noise of “too many pictures” and “very limited explore features”.
What a good picture means
OK, there is maybe Flexplore again, but it is still very limited. So I don’t know of any tool good at showing interesting pictures to ME, and I have no doubt someone else will not find the same pictures interesting as I do. And that’s the point. Some services have tried to quantify the aesthetics of a picture. I tried one, and I am not convinced! Other clever tools exist to auto-tag and auto-crop pictures (which means they could quantify a lot and help you a lot). But there is no real “discovery & explore” tool aside from Arthur’s criteria:
Quantity of views
Quantity of actions taken
Quality of person who viewed or acted (based on their own accumulated algorithm results)
I am sure this tool would need some personalization because, again, there is no absolute way to classify photographs. I couldn’t care less about puppy pictures; some would love to see thousands of them, and I respect that. Conversely, I like HDR, B&W and many other kinds of pictures, while some people love or dislike HDR or B&W, and so on.
We are still far from it – well, at least as far as I know – given how poorly the existing libraries quantify the aesthetics of a picture, and also because auto-tagging is still in its infancy. But for sure, there is a fantastic subject here for brave and talented entrepreneurs… In the meantime, I would love to see Pictarine and Arthur’s project helping me with my urgent need for good pictures!
Some people can lose all their digital work in a few minutes; that’s still very rare. More frequently, your hard drive can crash, at any time, without notice. At the end of the day, our digital assets have become so important that we cannot live without a clear back-up strategy. For a photographer, nothing could be more important.
You can’t have everything just in the cloud; that’s too dangerous. Without being paranoid: services can shut down, someone can steal your password and delete your files and, perhaps more importantly, it is good to keep control of your assets.
That said, the main risk today is still a hard-drive crash. So it is very dangerous not to back up your work on at least one other medium.
A robbery could make you lose everything on your hard drives, unless you keep one in another location.
Back-up is boring, sometimes even painful, always time consuming, and it can cost significantly. It is like going to the dentist: no one likes it, but we have to.
Defining the minimum back-up strategy
My “minimum” back-up strategy is to store my digital assets on:
a desktop (or a laptop) hard drive,
the cloud thanks to Google drive (or dropbox, or skydrive),
another hard drive.
Actually, I am also using a second hard drive, not located in my apartment, which I use only once per year. My setup is not so safe otherwise, as I don’t have any password on my desktop (I don’t need one), so I can see a scenario, unlikely of course, where I could lose everything during a robbery. Thanks to this second hard drive kept in another place, that looks nearly impossible.
Cloud services are not that cheap. Google Drive will cost you 100 € per year to store 200 GB, so that’s way more expensive than buying a hard drive! But it is very flexible, and you can access your images from any device, anywhere on Earth. That said, it is not a very cheap way to back up your images; then again, it is not only a back-up service. Dropbox is even twice as expensive as Google and, for photography back-up, does not bring anything more, imho. Box can be expensive if you have just a couple of hundred gigabytes, but becomes very competitive for storage above 500 GB.
The bottom line
Keep it simple. But do it.
It has become very affordable to secure all your digital assets. Don’t miss this chance… and make sure your data are resilient to most threats.