Now and then I see people claiming it as a virtue that they don’t “do anything” to their photos after they come out of the camera.
As in most religious wars, neither side's position is totally lacking in value.
Ansel Adams said “Yes, in the sense that the negative is like the composer’s score. Then, using that musical analogy, the print is the performance.” (Sheff, David (1 May 1983), Playboy Interview: Ansel Adams, p. 226) With film, the negative doesn’t look anything like the final image (true enough in black and white, far more true in color), and everybody understood there were decisions required to get from there to a print. Everybody knew it was very important who did your printing (and many people did their own printing in the darkroom, at least until they got successful enough to hire a first-rate printer to work for them).
The actual data recorded by a digital sensor is even less like an image that would mean anything to a human: the Bayer filter array records only one color at each pixel, and those single-color samples are then interpolated by fairly complicated algorithms ("demosaicing") into a full-color image. But the software that does this is built into our cameras, so people who are unaware of it (or just haven't thought about it much) can come away with the idea that the JPEG that comes out of the camera is what the sensor actually captured, and that any change from that is a distortion.
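To make that concrete, here is a deliberately naive sketch (Python/NumPy) of bilinear demosaicing, assuming an RGGB Bayer layout; real in-camera pipelines use far more sophisticated, edge-aware algorithms, but even this toy version shows that a color image has to be computed from the sensor data, it isn't simply read off:

```python
import numpy as np

def box3(a):
    """Sum over each pixel's 3x3 neighbourhood (zero-padded at the edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an assumed RGGB Bayer mosaic.

    `raw` is a 2-D float array where each element holds exactly one
    colour sample; rows alternate R,G / G,B.  Each output channel is
    the local average of that channel's known samples.
    """
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)
    b_mask = (y % 2 == 1) & (x % 2 == 1)
    g_mask = ~(r_mask | b_mask)
    rgb = np.empty((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        # Average the known samples of this colour around each pixel.
        rgb[..., c] = box3(samples) / box3(mask.astype(float))
    return rgb
```

A uniform grey mosaic demosaics to a uniform grey image; anything with real detail requires interpolation decisions, and different raw converters make them differently.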
Okay, it's time to explicitly set aside a couple of important but unusual branches of photography. In scientific, forensic, and probably some other specialized branches, one is basically recording data rather than making images. The photographer needs to be prepared to argue (or formally testify) that the image accurately represents reality in the specific ways that matter (and may wildly distort other aspects of the image if that makes the thing being illustrated more clearly visible). These are extremely valuable, important areas of photography, but they don't participate in stupid arguments about "altering" photos; they have had clear formal standards and principles for many decades. This argument isn't relevant to them.
The experience of actually being in a place, seeing it with your own eyes, is complex and largely synthetic (built up in your brain). We see at full resolution with only a small portion of the retina, but the way the eyes scan a scene and the brain integrates the results makes us feel as though we perceive sharp detail everywhere. Our eyes can also see detail across a vastly greater brightness range than cameras can, and that too enters into our perception. It's usually desirable to print camera images so they resemble what we think we see, rather than what we actually see at any one point in the scene.
One of the most obvious areas is white balance. Leaving the camera on "auto" white balance somewhat masks the issue (it lets the camera make what is, in digital, essentially a post-processing decision), but it's quite common to want to make the color captured outdoors in open shade look much more like the color captured indoors under incandescent light than it actually is. Generally this can be done more accurately in post-processing than by relying on "auto".
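For illustration, here is a minimal "gray world" auto-white-balance sketch in Python/NumPy. It assumes the scene averages to neutral grey, which is one of the crudest heuristics available (real raw converters are far more sophisticated), but the underlying mechanism, a per-channel gain applied after capture, is the same:

```python
import numpy as np

def gray_world_wb(img):
    """Rescale each channel so the image's average colour is neutral.

    img: float array of shape (H, W, 3), linear RGB in [0, 1].
    The "gray world" assumption is crude; it's here only to show that
    white balance is just a per-channel gain applied after capture.
    """
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means   # push every channel toward the common mean
    return np.clip(img * gains, 0.0, 1.0)
```

Doing this on the raw data in post lets you pick the gains deliberately (or from a reference grey card) instead of trusting whatever heuristic the camera applied.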
Not every scene you stand in front of looks best cropped to 4:3 (or 3:2 or 6:7 or 16:9 or 8:10). Making a fetish of "not cropping" (by which people mean cropping only in the camera, and hence always using that exact aspect ratio) will make your work worse. (With miniature cameras, especially 35mm and smaller, and with early digital captures, cropping too much could highlight technical issues (grain, etc.), and it was good practice to frame tightly to minimize cropping later, yes. But that's "try" and "minimize", not a fetish of avoiding it completely.) Mind you, the look of prints with a natural black border from an oversize negative carrier really is rather nice.
When photographing a static scene with plenty of time, it's worthwhile to make precisely-exposed images that capture the scene as well as possible. This saves some time later and yields better technical quality (often only very slightly better, but sometimes vastly better). And old slide photographers who projected their original slides (that is, amateurs) were very aware that they had no option to adjust exposure, or cropping, later. However, those of us who photograph dynamic scenes, perhaps with changing lighting, while moving through them ourselves, would frequently miss our best shots if we took the time to perfect the exposure every time. Yes, as we get better we judge exposure better by eye and simply work faster, and that has technical benefits not to be despised; but the ability to shoot quickly is very often the difference between a brilliant picture and a useless one in dynamic situations, and using post-processing to make that brilliant but somewhat under-exposed picture look excellent is IMHO entirely appropriate.
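In linear-light terms, "rescuing" an under-exposed frame in post is just multiplying by a power of two per stop, which is also why a badly under-exposed capture gets noisy when pushed hard: the noise gets multiplied along with the signal. A sketch, assuming linear sensor data normalized to [0, 1]:

```python
import numpy as np

def push_exposure(linear_img, stops):
    """Brighten (or, with negative stops, darken) linear-light image data.

    Each photographic stop is a doubling of light, so pushing by
    `stops` multiplies by 2**stops; values clip at pure white.
    """
    return np.clip(linear_img * 2.0 ** stops, 0.0, 1.0)
```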
Making exhibition-grade images often involves a lot of local brightness adjustment: "dodging" and "burning", deliberate "vignetting" (or its opposite, correcting corner falloff in your lenses), gradient filters (film photographers sometimes used split neutral-density filters on their lenses, especially with slides, because of the narrow brightness range film could capture), and so forth. Darkening areas to suppress distracting detail, creating brightness patterns to lead the eye, emphasizing the important features. Pablo Inirio was the master printer at Magnum Photos for decades, and Magnum has released a few of his annotated work prints showing how important photos were printed.
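In the darkroom these moves were made by adding or holding back light under the enlarger; digitally, dodging and burning reduce to multiplying the image by a feathered mask. A minimal sketch, with the mask shapes and amounts entirely made up for illustration:

```python
import numpy as np

def burn(img, mask, amount=0.3):
    """Darken ("burn in") the image where mask approaches 1.

    mask: float array in [0, 1] matching the image's height and width;
    fractional values feather the edge, like moving a card under the
    enlarger.  img is linear RGB, shape (H, W, 3).
    """
    return img * (1.0 - amount * mask[..., np.newaxis])

def dodge(img, mask, amount=0.3):
    """Lighten ("dodge", i.e. hold back exposure) where mask approaches 1."""
    return np.clip(img * (1.0 + amount * mask[..., np.newaxis]), 0.0, 1.0)
```

Real tools layer many such masks (often hand-painted, like Inirio's pencil annotations) rather than applying one global curve.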