Disclaimer: This is probably overkill for such a task, but I’m currently learning Rust and wanted to write a small program to do the job.
I was coding my photo gallery this weekend and needed to extract the EXIF data programmatically and include it in the individual photo pages. As in the other parts of my website, I also wanted to use Astro Content Collections, so there is a route for every photo with detailed information such as the EXIF data and possibly some comments and other stuff.
Problem description: create a bunch of MDX files in a directory with the same names as the images/photographs and automatically put the EXIF data, metadata, and content inside them.
So, I have a lot of photos and I need an .mdx file for every one of them in a directory. The photos are already processed (resized and converted to .webp) and hosted on my server, so I am not processing them with Astro’s image processing features. I could do that and push everything through my Astro deploy process, but I am hosting my photos on my own server and this website is still on Netlify’s free plan. I will migrate it to my server too when I have time.
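To make the goal concrete, each generated file should end up looking roughly like this (sample-photo is a hypothetical file name, and the date and camera values are made up):
sample-photo.mdx
---
draft: false
title: "sample-photo"
shotDate: 2024-05-01
camera: "FUJIFILM X-T4"
---

import {Picture} from 'astro:assets';
<picture>
<source srcset={`https://img.kenan.fyi/photos/originals/${frontmatter.title}_1920px.webp`} type="image/webp"/>
<img src={`https://img.kenan.fyi/photos/originals/${frontmatter.title}_1920px.jpg`} alt="" width="2880" height="1200" loading="lazy" decoding="async"/>
</picture>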
Anyway, below is the code where I use the rexiv2 crate to generate the necessary files with the required EXIF data inside them in a formatted manner.
exif-to-md.rs
use chrono::NaiveDateTime; // for formatting time and date
use rexiv2::Metadata;
use std::fs::{self, File};
use std::io::Write;

fn main() -> std::io::Result<()> {
    let folder_path = "./photos";
    // using MDX in Astro, so the files should be .mdx
    let markdown_extension = ".mdx";
    // initialize the rexiv2 crate
    let _ = rexiv2::initialize();
    // read the photos directory
    for entry in fs::read_dir(folder_path)? {
        let entry = entry?;
        let path = entry.path();
        // check if the entry is a file
        if path.is_file() {
            if let Some(file_stem) = path.file_stem() {
                let file_stem = file_stem.to_string_lossy().trim().to_lowercase();
                let markdown_file_path =
                    format!("{}/{}{}", folder_path, file_stem, markdown_extension);
                // EXIF values default to empty strings if a tag is missing
                let mut date_taken = "".to_string();
                let mut camera_model = "".to_string();
                let mut camera_make = "".to_string();
                // content of the files to match my Astro component structure
                let md_content = r#"
import {Picture} from 'astro:assets';
<picture>
<source srcset={`https://img.kenan.fyi/photos/originals/${frontmatter.title}_1920px.webp`} type="image/webp"/>
<img src={`https://img.kenan.fyi/photos/originals/${frontmatter.title}_1920px.jpg`} alt="" width="2880" height="1200" loading="lazy" decoding="async"/>
</picture>
"#;
                // load the metadata using rexiv2
                if let Ok(metadata) = Metadata::new_from_path(&path) {
                    // get the date
                    if let Ok(value) = metadata.get_tag_string("Exif.Image.DateTime") {
                        // parse the date from YYYY:MM:DD HH:MM:SS to YYYY-MM-DD
                        if let Ok(parsed_date) =
                            NaiveDateTime::parse_from_str(&value, "%Y:%m:%d %H:%M:%S")
                        {
                            date_taken = parsed_date.format("%Y-%m-%d").to_string();
                        }
                    }
                    // get the camera make
                    if let Ok(value) = metadata.get_tag_string("Exif.Image.Make") {
                        camera_make = value;
                    }
                    // get the camera model
                    if let Ok(value) = metadata.get_tag_string("Exif.Image.Model") {
                        camera_model = value;
                    }
                }
                // create the markdown file and write the frontmatter plus content to it
                let mut file = File::create(&markdown_file_path)?;
                writeln!(file, "---")?;
                writeln!(file, "draft: false")?;
                writeln!(file, "title: \"{}\"", file_stem)?; // title from the file name
                writeln!(file, "shotDate: {}", date_taken)?; // EXIF date taken
                writeln!(file, "camera: \"{} {}\"", camera_make, camera_model)?; // EXIF camera make and model
                writeln!(file, "---\n")?;
                writeln!(file, "{}", md_content)?;
                println!("Created: {}", markdown_file_path);
            }
        }
    }
    Ok(())
}
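If you want to reproduce it, note that rexiv2 is a binding to the native gexiv2/exiv2 libraries, so those have to be installed on the system first. Assuming the code lives as main.rs in a plain Cargo binary project, the setup is roughly this (a sketch, not the exact commands I ran):
terminal
# rexiv2 requires the system gexiv2/exiv2 libraries to be present
cargo add rexiv2 chrono
cargo run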
So, what it basically does is the following: it reads every file in the ./photos directory, extracts the shot date, camera make, and camera model from the EXIF data (leaving them as empty strings when a tag is missing), and writes an .mdx file with the same name for each photo, containing that information as frontmatter followed by the picture markup my Astro component expects.
It is a one-time solution for a specific problem, but as I said, it is a small practice in Rust and an easy way to create the many files I needed with specific data in them. I would probably implement a better program (maybe a CLI tool?) to streamline the whole process, because this only solves the .mdx creation part; the resizing, processing, and uploading of the photos are other parts of the whole workflow, which I have not covered here.
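For instance, the hard-coded folder path would be the first thing to make configurable; here is a minimal sketch of that idea with std::env (hypothetical, not part of the script above):
exif-to-md.rs (sketch)
use std::env;

fn main() {
    // hypothetical improvement: take the photos directory as the first CLI argument,
    // falling back to ./photos when none is given
    let folder_path = env::args().nth(1).unwrap_or_else(|| "./photos".to_string());
    println!("Processing photos in {}", folder_path);
    // ...the rest of the program would use folder_path exactly as before
}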
Maybe I will post about them later too. Cheers.
I was trying to run a MediaWiki instance on my dummy server and wanted to serve it with Caddy this time. Apache and Nginx configurations are fairly straightforward and easy to find, but Caddy, since it is fairly new, can be tricky if you are trying to do some non-standard stuff. Since I got it running with the desired configuration, I wanted to document my observations.
The configuration has two parts: the standard Caddyfile, which I really like working with, and LocalSettings.php, the default setup file for MediaWiki instances.
Serving a PHP application through Caddy is straightforward, but I wanted to configure my installation with the so-called short URLs. This requires some setup in the Caddyfile:
Caddyfile
your.domain {
    @title {
        not file {
            try_files {path} {path}/
            split_path .php
        }
        path_regexp title ^/(.*)$
    }
    rewrite @title /index.php?title={re.title.1}&{query}

    root * /var/www/html/mw # this might be "mediawiki" for you
    php_fastcgi unix//var/run/php/php-fpm.sock # serving PHP applications
    file_server
}
This configuration gives you the more structured /wiki/Main_Page style URLs instead of the ugly default MediaWiki ones. Further explanations are here.
After that, you need to edit LocalSettings.php too:
LocalSettings.php
$wgScriptPath = "";
$wgArticlePath = "/wiki/$1"; #for redirecting to /wiki/
$wgUsePathInfo = true;
Reload the Caddy config:
terminal
caddy reload --config /etc/caddy/Caddyfile
Done.
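A quick way to verify the short URLs (using the default Main_Page article as a hypothetical example) is to request one and check that MediaWiki answers it rather than the file server returning a 404:
terminal
# a short URL should resolve to the article page
curl -I https://your.domain/wiki/Main_Page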