Per my last post on the subject, I went ahead and smashed out a small script to organize, thumbnail and sync images to a small FreeBSD VPS running nginx for the sole purpose of flinging files out, then stuck it on a subdomain and put Cloudflare in front of it too.

I edited a couple of posts and put photos in them, finding it (for the most part) much more convenient than uploading to imgur or anything like that. I don't have to worry about imgur changing policies in ten years and invalidating links (a frustratingly huge part of the web right now, thanks to Photobucket), and it's not dogshit slow like Flickr has become. I still haven't decided whether the nginx thing is permanent, or whether I'll stuff all the images in an S3 bucket instead. The good thing about the way my site is structured right now is that changing from one host to another is just a single large rsync (or s3cmd sync, as the case may be) away - and I can do that from the VPS if I have to, so changing should be pretty painless except in the most dire of circumstances.
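For the curious, that one-command migration looks roughly like this - a sketch only, with made-up host and bucket names, and the commands echoed rather than run:

```shell
#!/bin/sh
# Sketch of the "one big sync" migration step. The remote path and the
# bucket name here are placeholders, not my real ones.
SRC="output/images/"
VPS_DEST="imagehost:/usr/local/www/images/"
S3_DEST="s3://example-image-bucket/"

# Build the command for each backend without running it, so it's easy
# to see that switching hosts really is a one-liner either way.
sync_cmd() {
        case "$1" in
                vps) echo rsync -az --delete "$SRC" "$VPS_DEST" ;;
                s3)  echo s3cmd sync --delete-removed "$SRC" "$S3_DEST" ;;
        esac
}

sync_cmd vps
sync_cmd s3
```

Either way, only changed files go over the wire, which is what makes the switch cheap.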

The results are pretty good: I'm not sure how much it helps, but I stuffed a couple of DNS prefetch tags into the top of my HTML, and on perfect cache hits I have entire page loads under half a second - and most of that is still Google Maps. For the maps, I'm thinking that if I leave the copyright intact and link back to the page, I can probably mirror the static images and be okay, but mirroring the script is a whole lot harder than mirroring the images, so I'm unsure whether I want to bother yet. At this point I'm just making sacrifices on the altar of website speed for no good reason.

There is one more thing I could do to speed my site up: concatenate all my CSS and JavaScript into single files, which would turn 8 requests into two (in theory). Since (well, it used to be that) the average browser will manage about 8 parallel requests at once - and that's pretty optimistic I think, though I haven't kept up with what browsers are doing these days - that could theoretically slash my load times further, so it may be worth looking into. I originally kept them separate because in many cases (jQuery, Bootstrap, and so on) I'm loading from CDNs, so I could potentially benefit from visitors' existing caches, but I don't know whether that's worth worrying about, and in the worst case it adds extra DNS lookups.
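The concatenation step itself is trivial - here's a minimal sketch with throwaway file names (my real theme's files differ), leaving minification aside:

```shell
#!/bin/sh
# Demonstration with throwaway files: N stylesheets and scripts become
# one file each, turning N requests into one per asset type.
mkdir -p demo/css demo/js build
printf 'body { margin: 0; }\n'   > demo/css/base.css
printf '.nav { color: #333; }\n' > demo/css/nav.css
printf 'console.log("hi");\n'    > demo/js/site.js

# Order matters for CSS cascade and JS dependencies; a glob is fine for
# a demo, but a real build would list files explicitly.
cat demo/css/*.css > build/all.css
cat demo/js/*.js   > build/all.js
```

In practice you'd run a minifier over the result too, and be careful about file ordering when scripts depend on each other.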

If you're interested, I knocked together a little script to generate thumbnails. It does tie me to a single box if I want to include images, but I think that's a limitation I can live with: it doesn't pollute my Git repository with massive binaries, and I'll be maintaining at least two copies of every image, so I think it'll work. Anyway, here's the script:

#!/usr/local/bin/bash

input=Pictures
output=output/images
tsize=160x160

# Stop script on first error
set -e

# Make sure output directory exists
thumbs=${output}/thumbnail
mkdir -p "${thumbs}"

find "${input}" \( -iname '*.png' -o -iname '*.jpg' \) -print0 | while IFS= read -r -d '' f
do
        file_base=${f#${input}/}
        file_original=${input}/$file_base
        file_thumb=${thumbs}/$file_base
        file_dir=$(dirname "$file_base")

        # Skip if the thumbnail already exists
        if [ ! -f "$file_thumb" ]; then
                mkdir -p "${output}/$file_dir" "${thumbs}/$file_dir"
                echo "Processing: $file_original"

                # Copy original image, and strip any geotag from it
                cp "$file_original" "${output}/$file_dir"
                exiftool -q -overwrite_original -gps:all= -makernotes:all= "${output}/$file_base"

                # Now create a thumbnail of the desired size
                convert -thumbnail "$tsize" "${output}/$file_base" "${thumbs}/$file_base"
        fi
done

It's quite fast, taking around two seconds to add one new image on an ancient Athlon X2 250, and it doesn't bother regenerating a thumbnail if one already exists - in most cases rsync takes longer to do its thing than the script does.

Update (2017-10-25): I discovered that with a handy plugin called assets, Pelican can use the webassets Python package to concatenate CSS and JS into a single file each and then minify them, so I went ahead and installed it, then pushed all ~850 files to S3. I then noticed the assets were using relative, not absolute URLs, so I had to push all ~850 files again. RIP my PUT request count for S3 this month.
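For reference, the push itself is just another sync - a sketch with a placeholder bucket name, echoed rather than executed since it needs credentials:

```shell
#!/bin/sh
# Placeholder bucket name. --acl-public makes objects world-readable and
# --guess-mime-type sets Content-Type, both of which matter when the
# bucket serves a site directly. Echoed here instead of run.
cmd="s3cmd sync --acl-public --guess-mime-type --delete-removed output/ s3://example-blog-bucket/"
echo "$cmd"
```

Since sync only uploads changed files, the accidental double-push mostly cost PUT requests for the files whose asset URLs changed.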

I'm pretty happy with the results: as forecast, I turned 8 requests into two, and possibly saved a few extra bytes thanks to a couple fewer sets of gzip headers. More importantly, it appears to have broken nothing.

Horsham, VIC, Australia fwaggle
