Aurora Photography Meets AI: My Workflow for Capturing and Editing the Northern Lights in Alaska

Last February I stood outside my cabin at 2 AM in minus thirty degrees, watching the sky do something that no monitor has ever accurately reproduced. The aurora was rolling in green and violet sheets from horizon to horizon, and my camera was on a tripod capturing thirty-second exposures while my fingers went numb inside two layers of gloves.

I got back inside with 247 RAW files. About forty of them were genuinely good. The rest had star trails from wind vibration, exposure blown by a sudden bright surge, or composition issues from the simple fact that you can't see much through a viewfinder in the dark when your eyelashes are frosting over.

That was the night I started building an AI-assisted post-processing workflow that has since changed how I handle aurora photography entirely. Not because AI replaces the craft — it doesn't — but because it handles the tedious filtering and correction work that used to take me an entire weekend for a single shoot.

I'm a software engineer who's been building things for over thirty years. I run Grizzly Peak Software from a cabin in Caswell Lakes, Alaska. I'm not a professional photographer. But living where the aurora is visible roughly 240 nights per year means I've taken a lot of photos of it, and I've learned both the photographic and the computational sides of making those photos look like what my eyes actually saw.


The Problem With Aurora Photography

Here's what nobody tells you about shooting the northern lights: the camera lies. It lies in both directions.

Long exposures on a modern sensor will pick up aurora colors your eyes can't see — deep reds and blues that are genuinely there but below the threshold of human rod and cone sensitivity. That's a pleasant lie. The unpleasant lie is that the camera also flattens the dynamic motion, blows out the brightest bands, and introduces noise artifacts that make the subtle curtain structures look like watercolor smears.

The gap between what you saw and what the camera recorded is significant. And closing that gap manually in Lightroom or Photoshop for hundreds of images is the kind of repetitive precision work that makes you question your hobbies.

My typical shooting session in peak aurora season produces 150-400 RAW files over two to four hours. Of those, maybe 15-20% are keepers based on composition and exposure alone. Of the keepers, each one needs noise reduction, color correction, and usually some selective adjustment to recover detail in the brightest bands without crushing the darker curtain structures.

That's where the AI workflow comes in.


Step 1: Automated Culling With Image Classification

The first thing I built was a simple Node.js script that uses a vision model to classify aurora photos into three buckets: keep, maybe, and reject. This replaced about two hours of manual scrolling through thumbnails.

var fs = require('fs');
var path = require('path');

function classifyAuroraShots(directory) {
  var files = fs.readdirSync(directory)
    .filter(function(f) { return /\.(jpg|jpeg|tif|tiff)$/i.test(f); });

  var results = { keep: [], maybe: [], reject: [] };

  files.forEach(function(file, index) {
    var filePath = path.join(directory, file);
    console.log('Classifying ' + (index + 1) + '/' + files.length + ': ' + file);

    // Convert to base64 for vision API
    var imageData = fs.readFileSync(filePath);
    var base64 = imageData.toString('base64');

    var classification = callVisionAPI(base64, {
      prompt: [
        'Classify this aurora photograph. Consider:',
        '1. Is the aurora clearly visible with defined structure?',
        '2. Is the image sharp (no motion blur or star trails)?',
        '3. Is there interesting foreground composition?',
        '4. Is the exposure reasonable (not severely blown or underexposed)?',
        'Respond with exactly one word: KEEP, MAYBE, or REJECT.'
      ].join('\n')
    });

    // Normalize in case the model adds punctuation or extra whitespace
    var bucket = classification.trim().toLowerCase().replace(/[^a-z]/g, '');
    if (results[bucket]) {
      results[bucket].push(file);
    } else {
      results.maybe.push(file); // Default to maybe if unclear
    }
  });

  return results;
}

The vision API call itself is straightforward — I'm using Claude's vision capability for this because it handles the nuance better than a simple image classifier. A blurry aurora shot and a sharp one of a faint aurora look similar to basic classifiers, but a model that understands the context of "is this a good northern lights photograph" makes genuinely useful distinctions.
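The callVisionAPI helper in the script above is just a thin wrapper over the model's HTTP endpoint. Here's a rough sketch of what it looks like, assuming the Anthropic Messages API request shape — the model name is a placeholder, and you'd swap in whatever provider and version you actually use:

```javascript
// Sketch of the request body a callVisionAPI wrapper might build.
// Assumes the Anthropic Messages API shape; model name is a placeholder.
function buildVisionRequest(base64Image, promptText) {
  return {
    model: 'claude-sonnet-latest', // placeholder; substitute your own model
    max_tokens: 10, // we only want one word back: KEEP, MAYBE, or REJECT
    messages: [{
      role: 'user',
      content: [
        {
          type: 'image',
          source: { type: 'base64', media_type: 'image/jpeg', data: base64Image }
        },
        { type: 'text', text: promptText }
      ]
    }]
  };
}

// The wrapper itself is then a single POST against the API:
function callVisionAPISketch(base64Image, promptText, apiKey) {
  return fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json'
    },
    body: JSON.stringify(buildVisionRequest(base64Image, promptText))
  })
    .then(function(res) { return res.json(); })
    .then(function(data) { return data.content[0].text; });
}
```

Nothing clever here — the interesting part is entirely in the prompt, not the plumbing.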

The classification isn't perfect. It rejects maybe 5% of shots I'd have kept, and keeps maybe 10% I'd have rejected. But it cuts my manual review time from two hours to about twenty minutes, because I only need to scan the "maybe" pile and spot-check the rejects.


Step 2: Exposure and Noise Analysis

Once I have my keepers, the next step is figuring out what each image needs. Aurora shots have wildly variable characteristics even within the same session — a bright coronal burst needs different processing than a faint diffuse glow.

I wrote a script that extracts EXIF data and analyzes the histogram to generate a processing profile for each image:

var sharp = require('sharp');

function analyzeAuroraExposure(imagePath) {
  return sharp(imagePath)
    .stats()
    .then(function(stats) {
      var channels = stats.channels;

      // Green channel dominance indicates strong aurora
      var greenDominance = channels[1].mean / ((channels[0].mean + channels[2].mean) / 2);

      // Flag blown highlights in the green channel
      var greenClipping = channels[1].max > 250;

      // Overall brightness assessment
      var avgBrightness = (channels[0].mean + channels[1].mean + channels[2].mean) / 3;

      // Rough noise proxy: the lowest per-channel standard deviation
      // (a proper estimate would sample dark regions only)
      var noiseEstimate = Math.min(channels[0].stdev, channels[1].stdev, channels[2].stdev);

      return {
        file: imagePath,
        greenDominance: greenDominance.toFixed(2),
        hasClipping: greenClipping,
        brightness: avgBrightness.toFixed(1),
        noiseLevel: noiseEstimate > 30 ? 'high' : noiseEstimate > 15 ? 'medium' : 'low',
        suggestedProfile: getSuggestedProfile(greenDominance, greenClipping, avgBrightness, noiseEstimate)
      };
    });
}

function getSuggestedProfile(greenDom, clipping, brightness, noise) {
  var profile = {
    noiseReduction: 'medium',
    greenRecovery: false,
    exposureAdjust: 0,
    highlightRecovery: false,
    contrastBoost: false
  };

  if (noise > 30) profile.noiseReduction = 'heavy';
  if (noise < 10) profile.noiseReduction = 'light';

  if (clipping) {
    profile.highlightRecovery = true;
    profile.exposureAdjust = -0.5;
  }

  if (brightness < 40) {
    profile.exposureAdjust += 1.0;
    profile.contrastBoost = true;
  }

  if (greenDom > 2.0) {
    profile.greenRecovery = true; // Tone down oversaturated green
  }

  return profile;
}

This analysis step is fast — maybe thirty seconds for a hundred images. What it gives me is a starting point for batch processing that's specific to each image's characteristics rather than applying the same preset to everything.
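One detail worth calling out: sharp's stats() is asynchronous, so analyzeAuroraExposure returns a Promise, and a batch run has to wait on all of them before moving on. The pattern is simple; here's a stripped-down sketch using a stand-in analyzer so it runs without sharp or real images:

```javascript
// Stand-in for analyzeAuroraExposure, so this sketch runs without sharp.
function fakeAnalyze(file) {
  return Promise.resolve({ file: file, noiseLevel: 'medium' });
}

// Map each file to a Promise, then wait for all of them at once.
function analyzeBatch(files, analyze) {
  return Promise.all(files.map(function(f) { return analyze(f); }));
}

// In the real pipeline:
// analyzeBatch(keepers, analyzeAuroraExposure).then(function(analyses) { ... });
```

Forgetting the await here is an easy mistake — you end up iterating over an array of pending Promises instead of results.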


Step 3: AI-Assisted Noise Reduction

This is where things get genuinely impressive. Aurora photography at high ISO in cold conditions produces some of the worst noise you'll encounter in any kind of photography. The combination of high ISO (usually 1600-6400), long exposure (15-30 seconds), and extreme cold (which actually helps sensor noise but hurts battery performance) creates images where the noise is interleaved with real detail.

Traditional noise reduction — Gaussian blur, median filters — destroys the fine curtain structures that make aurora photos interesting. AI-based denoisers handle this dramatically better because they can distinguish between "random sensor noise" and "actual faint detail" in ways that statistical filters cannot.

I experimented with several approaches before settling on a pipeline that uses a pre-trained denoising model through a Python helper script called from my Node workflow:

var { execSync } = require('child_process');
var path = require('path');

function denoiseAuroraImage(inputPath, outputPath, noiseLevel) {
  var strength = {
    'light': 15,
    'medium': 30,
    'heavy': 50
  };

  var denoiseSetting = strength[noiseLevel] || 30;

  // Call Python script that runs the AI denoiser
  var cmd = 'python3 denoise_aurora.py'
    + ' --input "' + inputPath + '"'
    + ' --output "' + outputPath + '"'
    + ' --strength ' + denoiseSetting
    + ' --preserve-stars'  // Important: don't denoise point sources
    + ' --aurora-mask';    // Apply stronger denoising outside aurora regions

  try {
    execSync(cmd, { timeout: 120000 });
    console.log('Denoised: ' + path.basename(inputPath));
    return true;
  } catch (err) {
    console.error('Denoise failed for ' + inputPath + ': ' + err.message);
    return false;
  }
}

The --preserve-stars flag is critical. Stars are point sources of light that look exactly like single-pixel noise to a naive denoiser. Without special handling, you get beautiful smooth aurora in a sky with zero stars, which looks surreal and fake. The AI model I'm using was fine-tuned on astrophotography data, so it understands the difference, but you have to tell it to care.

The --aurora-mask flag tells the denoiser to detect aurora regions and apply lighter noise reduction there while being more aggressive in the dark sky and foreground. This preserves the delicate curtain texture while cleaning up the areas where noise is most visible.
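The masking logic itself lives in the Python helper, but the two ideas are simple enough to sketch conceptually in JavaScript. The thresholds below are invented for illustration — the real helper's model learned these distinctions from astrophotography data rather than using hard cutoffs:

```javascript
// Conceptual sketch of the two masks: aurora regions (green dominance)
// and star candidates (bright, isolated point sources). Thresholds here
// are made up for illustration. Operates on a flat 8-bit RGB buffer.
function buildDenoiseMasks(rgb, width, height) {
  var aurora = new Uint8Array(width * height); // 1 = lighter denoising
  var stars = new Uint8Array(width * height);  // 1 = skip denoising

  function px(x, y, c) { return rgb[(y * width + x) * 3 + c]; }
  function luma(x, y) {
    return 0.299 * px(x, y, 0) + 0.587 * px(x, y, 1) + 0.114 * px(x, y, 2);
  }

  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var i = y * width + x;
      var g = px(x, y, 1);

      // Aurora regions: green channel clearly dominates red and blue
      if (g > 40 && g > 1.3 * px(x, y, 0) && g > 1.3 * px(x, y, 2)) {
        aurora[i] = 1;
      }

      // Star candidates: a bright pixel that stands well above all eight
      // neighbours (image border skipped for simplicity)
      if (x > 0 && y > 0 && x < width - 1 && y < height - 1) {
        var center = luma(x, y);
        if (center > 180) {
          var isolated = true;
          for (var dy = -1; dy <= 1 && isolated; dy++) {
            for (var dx = -1; dx <= 1; dx++) {
              if (dx === 0 && dy === 0) continue;
              if (luma(x + dx, y + dy) > center - 60) { isolated = false; break; }
            }
          }
          if (isolated) stars[i] = 1;
        }
      }
    }
  }

  return { aurora: aurora, stars: stars };
}
```

The point of the sketch is the shape of the problem: a star and a hot pixel look identical locally, so the only usable signal is isolation plus brightness, while aurora is a regional color signature rather than a point feature.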


Step 4: Color Correction With Context

This is where I use AI in a way that I think is genuinely novel. The problem with aurora color correction is that "correct" depends on what the aurora actually looked like that night, and cameras don't capture that accurately.

Green aurora from oxygen at 100-300 km altitude looks different from green aurora at lower altitudes. Purple aurora from nitrogen ionization has a specific quality that cameras tend to shift toward blue. And the rare red aurora — the kind you see during major geomagnetic storms — is almost impossible to photograph without either losing it entirely or making it look like the sky is on fire.

I built a tool that takes my processing notes (I keep a voice memo during each shoot describing what I'm seeing) and uses them as context for color correction guidance:

function generateColorCorrectionGuide(imageAnalysis, sessionNotes) {
  var prompt = [
    'I photographed the aurora borealis with these conditions:',
    'Session notes: ' + sessionNotes,
    '',
    'Image analysis:',
    'Green channel dominance: ' + imageAnalysis.greenDominance,
    'Has highlight clipping: ' + imageAnalysis.hasClipping,
    'Average brightness: ' + imageAnalysis.brightness,
    '',
    'Based on the described aurora appearance and the camera analysis,',
    'suggest specific color correction values:',
    '1. White balance adjustment (K)',
    '2. Green saturation adjustment (-100 to +100)',
    '3. Magenta/violet adjustment if purple aurora described',
    '4. Red channel recovery if red aurora described',
    'Provide values as JSON.'
  ].join('\n');

  var response = callAIModel(prompt);

  try {
    return JSON.parse(response);
  } catch (e) {
    console.warn('Could not parse color correction response, using defaults');
    return {
      whiteBalance: 3800,
      greenSaturation: 10,
      magentaAdjust: 0,
      redRecovery: false
    };
  }
}

The voice memo step is important. When I'm standing outside at 2 AM watching a major aurora event, I pull out my phone and say things like "bright green curtains with purple lower edges, very fast movement, occasional red flashes near zenith." Those notes, combined with the image histogram data, give the AI enough context to suggest corrections that move the image toward what I actually saw rather than toward some generic "nice aurora" look.


Step 5: Batch Processing Pipeline

The whole workflow comes together in a batch pipeline that takes a directory of RAW files and produces processed outputs:

var fs = require('fs');
var path = require('path');

function processAuroraSession(sessionDir, options) {
  var rawDir = path.join(sessionDir, 'raw');
  var processedDir = path.join(sessionDir, 'processed');
  var notesFile = path.join(sessionDir, 'session-notes.txt');

  if (!fs.existsSync(processedDir)) {
    fs.mkdirSync(processedDir, { recursive: true });
  }

  var sessionNotes = '';
  if (fs.existsSync(notesFile)) {
    sessionNotes = fs.readFileSync(notesFile, 'utf8');
  }

  console.log('Step 1: Classifying images...');
  var classified = classifyAuroraShots(rawDir);
  console.log('Keep: ' + classified.keep.length +
    ', Maybe: ' + classified.maybe.length +
    ', Reject: ' + classified.reject.length);

  // Save classification results
  fs.writeFileSync(
    path.join(sessionDir, 'classification.json'),
    JSON.stringify(classified, null, 2)
  );

  var keepers = classified.keep.concat(classified.maybe);

  console.log('\nStep 2: Analyzing exposures...');
  // analyzeAuroraExposure returns a Promise (sharp's stats() is async),
  // so the denoise and correction passes wait until every analysis is done
  var analysisPromises = keepers.map(function(file) {
    return analyzeAuroraExposure(path.join(rawDir, file));
  });

  return Promise.all(analysisPromises).then(function(analyses) {
    console.log('\nStep 3: Denoising...');
    analyses.forEach(function(analysis) {
      var outputPath = path.join(processedDir, 'denoised-' + path.basename(analysis.file));
      denoiseAuroraImage(analysis.file, outputPath, analysis.noiseLevel);
    });

    console.log('\nStep 4: Generating color corrections...');
    analyses.forEach(function(analysis) {
      var corrections = generateColorCorrectionGuide(analysis, sessionNotes);
      var correctionFile = path.join(processedDir,
        path.basename(analysis.file, path.extname(analysis.file)) + '-corrections.json');
      fs.writeFileSync(correctionFile, JSON.stringify(corrections, null, 2));
    });

    console.log('\nProcessing complete.');
    console.log('Review corrections in: ' + processedDir);
  });
}

The pipeline is deliberately not fully automated in the final stages. It generates color correction suggestions as JSON files rather than applying them automatically, because the last mile of aurora editing is genuinely artistic and I don't want a machine making those decisions. The AI gets me 80% of the way there in a fraction of the time. The remaining 20% is where my judgment matters.


Results and Honest Assessment

I've been running this workflow for about a year now. Here's what I've learned.

The culling step saves the most time. Going from 300 images to 60 candidates in ten minutes instead of two hours is transformative. The accuracy is good enough that I've only found three genuine keepers in the reject pile across maybe a dozen sessions.

The noise reduction is genuinely better than what I was doing manually. AI denoisers trained on astrophotography data understand the difference between stars and noise in a way that slider-based tools don't. My images are cleaner and retain more detail than they did before.

The color correction guidance is the weakest link. It's helpful as a starting point, but about half the time I override the suggestions significantly. Aurora color is subjective — what looks "correct" depends on whether you're optimizing for scientific accuracy, visual impression, or the specific aesthetic you're going for. The AI doesn't know which of those I want for any given image.

The thing I didn't expect: the workflow made me a better photographer. Because the AI handles the tedious editing work, I spend more time thinking about composition and timing during the actual shoot. I'm getting better keepers per session because I'm not dreading the processing afterward.


Hardware Notes for the Curious

My aurora shooting setup is nothing exotic: a Nikon Z6 II (excellent high-ISO performance), a 14-24mm f/2.8 lens, and a carbon fiber tripod that doesn't shrink in the cold as badly as aluminum. I shoot RAW exclusively — never JPEG — because the latitude in a 14-bit RAW file is essential for recovering aurora detail.

The processing runs on a fairly modest workstation. The AI denoising step is the bottleneck — about 45 seconds per image on a GPU, or 3-4 minutes on CPU. For a typical session of 60 keepers, that's 45 minutes of denoising time on GPU. I usually start it before bed and check results in the morning.

The total storage requirement for a season of aurora photography is substantial. Each RAW file is about 50 MB, and a busy week might produce 1,000 shots. I keep everything on a NAS with redundant storage, because losing aurora photos from a once-in-a-decade geomagnetic storm would be genuinely painful.


What I'd Build Next

If I had more time, I'd build two things.

First, a real-time aurora intensity predictor that combines magnetometer data from the University of Alaska Fairbanks Geophysical Institute with local weather conditions to tell me whether it's worth going outside. Right now I check the Kp index manually and look out the window, which is the kind of process that begs for automation.
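The decision logic would be trivial — most of the work is already done by NOAA's public feeds. A sketch of the go/no-go check I currently do by hand, assuming the row format of the SWPC planetary K-index feed as I last saw it (header row, then rows of strings with Kp in the second column — verify against the live feed before relying on it):

```javascript
// Go/no-go check: latest Kp from the feed plus local cloud cover.
// Assumes SWPC-style rows: header row, then [time_tag, Kp, ...] strings.
function shouldGoOutside(kpRows, cloudCoverPercent) {
  // Skip the header row; take the most recent reading
  var latest = kpRows[kpRows.length - 1];
  var kp = parseFloat(latest[1]);

  // At my latitude (roughly 62 degrees north), Kp 3+ is usually worth a
  // look, but not if the sky is mostly overcast. Thresholds are my guesses.
  return kp >= 3 && cloudCoverPercent < 50;
}

// Wiring it to the live feed would look something like:
// fetch('https://services.swpc.noaa.gov/products/noaa-planetary-k-index.json')
//   .then(function(res) { return res.json(); })
//   .then(function(rows) { console.log(shouldGoOutside(rows, 20)); });
```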

Second, a timelapse assembly tool that uses the classification and color correction data to automatically select the best frames and apply consistent processing across a sequence. Aurora timelapses are spectacular, but keeping color and exposure consistent across hundreds of frames is currently a manual nightmare.
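The core of that consistency problem is flicker: if each frame gets an independent exposure correction, small per-frame differences show up as pulsing brightness in the final video. The usual fix is to smooth each frame's measured brightness toward a rolling average and correct toward that target instead. A sketch of what that pass might look like (window size is a guess, not something I've tuned):

```javascript
// Smooth per-frame brightness toward a centered rolling average so that
// exposure corrections don't flicker frame to frame in a timelapse.
function smoothBrightnessTargets(frameBrightness, windowSize) {
  var half = Math.floor(windowSize / 2);
  return frameBrightness.map(function(_, i) {
    var start = Math.max(0, i - half);
    var end = Math.min(frameBrightness.length, i + half + 1);
    var sum = 0;
    for (var j = start; j < end; j++) sum += frameBrightness[j];
    return sum / (end - start);
  });
}

// Each frame's exposure adjustment is then target minus measured:
// var targets = smoothBrightnessTargets(measured, 15);
// var adjustments = measured.map(function(b, i) { return targets[i] - b; });
```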

Both of those projects are sitting in a "someday" folder on my machine. Living in Alaska means there's always another aurora season coming.


The Bigger Point

This workflow is a specific instance of a general pattern: using AI to handle the mechanical parts of a creative process so you can focus on the parts that require human judgment. The AI doesn't make my aurora photos better in any artistic sense. It makes the process of getting from RAW files to finished images faster and more consistent, which means I spend more time doing the parts I enjoy — standing outside in the cold, watching the sky move.

If you're doing any kind of repetitive image processing — not just aurora photography, but product photos, real estate photography, event coverage — there's probably a version of this workflow that would save you significant time. The tools are accessible, the APIs are affordable, and the quality is genuinely good enough for professional use.

Just keep a human in the loop for the final decisions. The AI is excellent at the mechanics, but it doesn't know what you were feeling when you took the shot.

Shane Larson is a software engineer and the founder of Grizzly Peak Software. He writes code and photographs the aurora from his cabin in Caswell Lakes, Alaska. His book on training and fine-tuning large language models is available on Amazon.
