Deploying This Site with IPFS and Scheme


I build and deploy this website using a slightly unorthodox set of technologies: guix for the build environment, haunt for static site generation, and IPFS, by way of pinata and cloudflare, for hosting.

I chose these tools for what I believe to be practical reasons. Guix is the package manager for my distribution (also called guix) and has a lot of neat features built around creating reproducible environments. Haunt is a static site generator written in guile scheme, which allows me to reuse tooling that I've set up to manage my guix system. I primarily use sourcehut to store my repositories, but it unfortunately does not include a static site hosting service, so I decided to deploy my site to IPFS because it's shiny and I did not want to manage a server.

Setting Up The Environment With Guix

The three of you who read my inaugural post know that I nurture an unhealthy infatuation with the guix package manager. Briefly, it is like nix except written in guile scheme and committed to only adding free software to its official repositories. I personally believe it has a more intuitive interface than nix and feature parity where it matters most (yak-shaving).

The first thing I created for this project was a guix manifest. The file is called guix.scm and I activate it using the command guix environment --ad-hoc -m guix.scm. This is similar to creating a shell.nix file and running the command nix-shell in that guix environment spawns a subshell that includes the packages described in the manifest and does not pollute the user's package profile, like a generalized nvm. One can also activate these shells automatically and export their environment to an editor of choice using direnv.
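Direnv's standard library ships a use_guix helper that forwards its arguments to guix environment, so a one-line .envrc is enough to activate the manifest whenever I enter the project directory. A sketch:

```shell
# .envrc -- direnv activates the guix environment on entering the directory;
# use_guix passes its arguments straight through to `guix environment`
use guix --ad-hoc -m guix.scm
```

After creating the file, run direnv allow once to authorize it.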

;; guix.scm
(use-modules (gnu packages))

;; the packages named elsewhere in this post: haunt for building the
;; site, node for running ipfs-deploy, pandoc for (eventual) org-mode
;; conversion
(specifications->manifest
 '("haunt" "node" "pandoc"))

Building A Static Website With Haunt

Haunt is a simple static site generator that allows authors to treat their websites as guile scheme programs. It includes a shell utility with two commands: haunt build and haunt serve. The former converts markdown (and more) files to html and the latter sets up a server primarily for local development.
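The two commands in day-to-day use look like this (run inside the guix environment so haunt is on PATH):

```shell
haunt build       # renders posts and static assets into the site/ directory
haunt serve -w    # serves the site locally and rebuilds when files change
```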

Haunt provides several procedures for declarative site generation. These include procedures for creating site metadata, feeds, post templates, and static resource loaders, standard shit.

;;; a site declaration procedure
(site #:title "Elais Codes"
      #:domain ""
      #:default-metadata '((author . "Elais Player"))
      #:readers (list commonmark-reader)
      #:builders (list (blog #:theme elais-theme
                             #:collections %collections)
                       (static-directory "static/fonts" "fonts")
                       (static-directory "static/css" "css")))

Posts and pages are templated with SXML, which is an alternative syntax for XML that uses S-expressions. Since HTML is practically a subset of XML, this allows the site's templates to be embedded directly in scheme code using backquotes.

;; the backquote (`) character signals that in the expression that
;; follows, every subexpression preceded by a comma is to be
;; evaluated, and every subexpression not preceded by a comma
;; is to be quoted or read without evaluation.
(define (post-template post)
  `((div (@ (class "headline"))
         (h1 (@ (class "post-title"))
             (a (@ (href ,(prefix-url "")))
                ,(post-ref post 'title)))
         (div (@ (class "taglist"))
              ,(tags->links (post-ref post 'tags))))
    (div (@ (class "post"))
         (div (@ (class "date"))
              ,(date->string (post-date post) "<~Y-~m-~d>"))
         ,(post-sxml post))
    (center "---")))
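As a tiny, self-contained illustration of the quasiquote mechanics described in the comment above (the names here are made up for the example, not part of the site's code):

```scheme
(define title "Hello, IPFS")

;; the backquote quotes the whole template as data; each comma escapes
;; back into evaluated scheme, so title is replaced by its value
(define snippet
  `(h1 (@ (class "post-title")) ,title))

;; snippet is now the list: (h1 (@ (class "post-title")) "Hello, IPFS")
```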

Generating The Site

At the time of writing, this site's content is written in markdown. Each article is stored in the ${PWD}/posts directory. Once it's time to publish a new version of this site I run haunt build and, barring any errors, this generates a static site in ${PWD}/site. During development I run haunt serve -w, which serves the static content and watches for changes; it is very lightweight and updates the content in no time.

Deploying with IPFS

IPFS is an adolescent peer-to-peer protocol for storing and sharing data in a distributed file system. I use it primarily because sourcehut does not host static websites the way github or gitlab do, and I wanted to play with something new. Making an IPFS-deployed site accessible to the world wide web requires two services, one for pinning and another for DNS resolution.

Pinning a Static Website

Content addresses in IPFS are immutable and will always find data if said data is still on a node in the network. However, data on nodes is treated as a cache by default, and when a node fills up it runs a garbage collector, emptying its cache to make room for more data. "Pinning" tells a node that the data it is hosting is important and should not be thrown away during garbage collection. So, to make sure my deployed site is always available and never garbage collected, I use a pinning service called Pinata.
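For contrast, this is what pinning looks like on a self-hosted node with the ipfs CLI; a pinning service just does the equivalent on its own infrastructure:

```shell
ipfs add -r site/              # adds the directory and pins it locally, printing its CID
ipfs pin ls --type=recursive   # pins listed here survive `ipfs repo gc`
```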

Pinata "pins" my content, shifting the burden of maintaining and monitoring an ipfs node onto them and their highly available public nodes rather than onto me and an always-on machine. Though I have to sign up to use their service, the api is dead simple and they don't charge users until they reach 1 GB of pinned content. Since my site weighs in at ~90kb and won't be growing much anytime soon, it will be a while before I hit that limit.

# pinata's api environment variables that I store in a .env file
# (variable names follow ipfs-deploy's dotenv conventions)
IPFS_DEPLOY_PINATA__API_KEY=<pinata-api-key>
IPFS_DEPLOY_PINATA__SECRET_API_KEY=<pinata-secret-api-key>

Bridging IPFS and the World Wide Web

A DNS gateway is needed to serve IPFS content on the world wide web (i.e. from an ordinary browser). Cloudflare provides just that for IPFS users in the form of its distributed web gateway, a portal to content stored in ipfs nodes. To access this from my website's url, all I have to do is add a CNAME record for my domain that points to the gateway and a TXT record on the _dnslink subdomain that points to my site's content hash on ipfs.

# partial DNS configuration
CNAME   @          cloudflare-ipfs.com
TXT     _dnslink   dnslink=/ipfs/<ipfs-CID>

Once this propagates, my site hosted on ipfs becomes available to users on the world wide web. Since I'm also using cloudflare, I can add SSL/TLS to my domain using their tools, which is something we should always do.
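Propagation can be checked from the command line with dig; example.com stands in for the real domain here:

```shell
# query the DNSLink record; a correctly configured record answers
# with a value of the form "dnslink=/ipfs/<ipfs-CID>"
dig +short TXT _dnslink.example.com
```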

# .env
# cloudflare credentials, read by ipfs-deploy alongside the pinata keys
# (variable names follow ipfs-deploy's dotenv conventions)
IPFS_DEPLOY_CLOUDFLARE__API_TOKEN=<cloudflare-api-token>
IPFS_DEPLOY_CLOUDFLARE__ZONE=<your-domain>
IPFS_DEPLOY_CLOUDFLARE__RECORD=_dnslink.<your-domain>

So far we have created, built, and set up hosting platforms for the site. Now we need to deploy the damn thing. To do this we use a nodejs utility called ipfs-deploy, which has been hinted at in some of the previous code blocks. ipfs-deploy only requires a .env with the correct api tokens and credentials to work, and running it is a one-liner.

# command to use ipfs-deploy
npx ipfs-deploy -p pinata -d cloudflare -O site/

It depends on nodejs, python and gcc to work completely, as it requires native extensions. I use the npx command so that ipfs-deploy runs without actually installing itself on my machine, something I find important as I do not like polluting my user environment.

This is a one and done command, and after it is run my site is pinned on pinata and my DNS is updated to point to the correct IPFS address as proxied through cloudflare's ipfs gateway. It takes about a minute for the new site to become available.

Putting it all together

The steps for adding content, building, and deploying my site are fairly straightforward: write a new markdown file in posts/, build the site with haunt, and push the result to ipfs with ipfs-deploy.

To save time, I encapsulate the steps in a Makefile.

build:
    guix environment --ad-hoc -m guix.scm -- haunt build

publish: build
    guix environment --ad-hoc -m guix.scm -- npx ipfs-deploy -p pinata -d cloudflare -O site/

I use guix environment to run the commands in a subshell with all of their dependencies. I'm not sure if I could enter guix environment once and reuse it for both steps; I've never tried, and this works, so I don't care. The time it takes to create the subshell is trivial.


There are a few things I would like to add in the future. For instance, I would like to write my posts using org-mode. Like most people who use the eldritch horror that is emacs for writing prose in addition to code, org-mode is my first choice when editing text. I have pandoc listed as a dependency but haven't actually cared enough to write the script that takes a bunch of org-mode files and converts them to markdown as an intermediate step before running haunt build.
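That script would be small; here is a sketch of the conversion step using pandoc's org reader (the paths are hypothetical):

```shell
# convert every org file in posts/ to commonmark before `haunt build`
for f in posts/*.org; do
  pandoc --from org --to commonmark "$f" --output "${f%.org}.md"
done
```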

Also, you'll notice that I use two proprietary platforms, pinata and cloudflare, as part of my deployment process. Proprietary platforms are not ideal and can represent a threat to user privacy and security. I'd like to look into platforms that are more free but don't carry the burden of self-hosting, but for this project I went with what worked. This is not a solution for those who strictly adhere to the idea of software freedom.