
Static site generators are great for blogging. They let you work offline, write posts in a real editor, and produce static HTML sites that are easy to host.

I’m not a big fan of web applications with crappy WYSIWYG editors that can’t be used offline or in combination with Markdown and git. To make things worse, web apps are often slow, inject tons of unneeded stuff into websites, and open up a large attack surface. That’s why I never seriously considered WordPress, for example.

Over the last few years, I tried several blogging frameworks, migrating from Octopress to Pelican to Hexo. While they all seem great at first glance, I always ran into problems in the long run. With Python, dependency management is a mess (even with pip and virtualenv). And Hexo: I tried really hard to love it, but the code is just not intuitive to me, and the code quality (especially of the plugins) is not great.

Last week I was annoyed enough to rework my blog and finally get rid of blogging frameworks altogether. I migrated to Gulp with Nunjucks templates, Markdown, and some custom functions. In other words, I programmed a minimal blog engine with Gulp, in about 550 lines of code.

The task that builds the posts, for example, looks like this:

gulp.task('posts', function() {
    return gulp.src('content/_posts/*.md')
        .pipe(frontmatter({property: 'frontmatter',
                           remove: true}))
        .pipe(extractMetadata())
        .pipe(renderPage())
        .pipe(extractOpenGraph())
        .pipe(renderTemplate('post'))
        .pipe(gulp.dest('dist'));
});

It picks up all Markdown files in the posts directory, parses their front matter (which contains the title and tags), and extracts metadata such as the URL and the date from the file name. The resulting buffers are then rendered in two steps: first the post’s content, then the whole HTML page.
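
For illustration, extractMetadata() can be as simple as pulling the date and the URL out of a Jekyll-style file name; a rough sketch (the naming convention and the property names here are just examples, not necessarily my exact code):

var path = require('path');
var through = require('through2');

var extractMetadata = function() {
    return through.obj(function(file, enc, cb) {
        // Hypothetical file-name convention: 2016-05-20-my-post.md
        var name = path.basename(file.path, '.md');
        var match = /^(\d{4})-(\d{2})-(\d{2})-(.+)$/.exec(name);
        if (match) {
            file.date = new Date(+match[1], +match[2] - 1, +match[3]);
            file.url = '/' + match[1] + '/' + match[4] + '/';
        }
        // Title and tags come from the front matter parsed before.
        file.title = file.frontmatter.title;
        file.tags = file.frontmatter.tags;
        cb(null, file);
    });
};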

The rendering is split into two steps because the post’s content has to be reused in multiple contexts, such as the post’s own page and the blog index.
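
The first of the two steps, renderPage(), boils down to converting the Markdown body to HTML once and keeping the result on the file object; a rough sketch, using marked as an example Markdown renderer:

var through = require('through2');
var marked = require('marked');

var renderPage = function() {
    return through.obj(function(file, enc, cb) {
        // Render the Markdown body once and keep the HTML on the file
        // object, so both the post template and the index can use it.
        file.content = marked(file.contents.toString());
        cb(null, file);
    });
};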

Things like the index or the archive are just as easy to generate.

gulp.task('archive', function() {
    return gulp.src('content/_posts/*.md')
        .pipe(frontmatter({property: 'frontmatter',
                           remove: true}))
        .pipe(extractMetadata())
        .pipe(archive())
        .pipe(renderTemplate('archive'))
        .pipe(debug())
        .pipe(gulp.dest('dist'));
});

The trick is to first read all posts and record their metadata (archive()), and then use that metadata as context when rendering the template (renderTemplate('archive')).

var through = require('through2');
var File = require('vinyl');

var archive = function() {

    var posts = [];

    // Collect the metadata of every post flowing through the stream.
    function bufferContents(file, enc, cb) {
        var p = {
            title: file.title,
            url: file.url,
            date: file.date
        };
        posts.push(p);
        cb();
    }

    // Once all posts are in, sort them by date (newest first) and emit
    // a single new file that carries the list for the archive template.
    function endStream(cb) {
        posts.sort(function(a, b) {
            if (a.date < b.date) return 1;
            if (a.date > b.date) return -1;
            return 0;
        });
        var f = new File({
            cwd: '',
            base: '',
            path: 'archive/index.html'
        });
        f.posts = posts;
        this.push(f);
        cb();
    }

    return through.obj(bufferContents, endStream);
};
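
For completeness, renderTemplate() can be sketched along the same lines: a small wrapper around a Nunjucks environment that renders the named template with the file’s metadata as context (the templates directory, the .html suffix, and the context properties are illustrative choices):

var through = require('through2');
var nunjucks = require('nunjucks');

var env = nunjucks.configure('templates');

var renderTemplate = function(name) {
    return through.obj(function(file, enc, cb) {
        // Render the named Nunjucks template with the file's metadata as
        // context and replace the file's contents with the result.
        var html = env.render(name + '.html', {
            title: file.title,
            content: file.content,
            posts: file.posts
        });
        file.contents = new Buffer(html);
        cb(null, file);
    });
};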

Awesome :-) Currently, I’m very happy with it. Let’s see whether this also leads to problems in the long run, but I actually think it’s a pretty good approach. Since I don’t configure a blogging framework but program a bare-minimum one myself, I can adapt everything to my needs.

Furthermore, it relies on popular, basic libraries that are unlikely to be abandoned in the coming months when the n+1-th site generator is released.