Development Tooling
VitePress Development
VitePress development is straightforward. I run npm run dev with the definition shown in the VitePress jotting. At the beginning, I also used the plain build definition vitepress build docs. Most of my additions were implemented as Vite plugins, so they were automatically executed by Vite during the normal build. But when I introduced service workers and updates, I extended the build with a small shell script called blog.
This script calls the original VitePress build, but also generates the /version.txt file and the caching service worker. The script runs when I invoke npm run build, and that same invocation is used for the build at Cloudflare Pages. Therefore the Cloudflare build also always generates fresh version information and a matching service worker.
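A minimal sketch of such a wrapper – how the version string is derived and the name of the service-worker generator are assumptions here – could look like this:
#!/bin/sh
# blog -- wrap the VitePress build with version and service-worker generation
set -e
npx vitepress build docs
# write a build identifier the client can poll for updates
# (deriving it from the git revision is an assumption)
git rev-parse --short HEAD > docs/.vitepress/dist/version.txt
# regenerate the caching service worker against the fresh output
# (the generator script name is hypothetical)
node scripts/generate-service-worker.mjs docs/.vitepress/dist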
Some steps of a complete build take noticeable time and should not burden every build during development. Thus, blog is only one step of the bigger picture. Before a release, I run a prepare script that additionally checks for outdated npm packages and generates the timestamps and a sitemap. It also runs the consistency checks and validates external links.
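I won't reproduce the real script here; a sketch of the idea – every helper script name below is an assumption – might be:
#!/bin/sh
# prepare -- the slower pre-release housekeeping steps
set -e
npm outdated || true             # list packages with newer versions
node scripts/timestamps.mjs      # regenerate the page timestamps
node scripts/sitemap.mjs         # regenerate the sitemap
node scripts/check-consistency.mjs
node scripts/check-links.mjs     # validate external links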
Serverless Development
I run serverless functions as Cloudflare Workers. Cloudflare supports workers in their dashboard, and there is also a CLI named wrangler. It supports local development of workers, automatically detects source code changes, and restarts new versions. It also supports running development versions on cloud resources and uploading to the production environment. I had a bit of a hard time figuring out how to provide development versions of API tokens and other secrets. The key is to define them in a file named .dev.vars on your local machine.
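The file uses the usual dotenv syntax and should never be committed. For example (the variable name is just an example):
# .dev.vars -- local secrets, picked up automatically by wrangler dev
API_TOKEN="my-local-development-token"
wrangler dev then exposes these values on env, just like the production secrets you set with wrangler secret put.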
I actually have two workers. One handles the feedback form; the other automatically removes old deployments and is covered in the next section.
Cleaning up Old Pages Deployments
If you use Pages, you will find that deployments stack up quickly. The advantage is that you can quickly redeploy old builds as needed. On the other hand, even though storage is cheap, I'm not a big fan of wasting resources. Manually deleting old versions via the dashboard is tedious: there is no bulk deletion, only deployment by deployment.
Cloudflare's documentation has examples of how to use the Cloudflare API to delete deployments. Workers are a good choice for implementing such housekeeping. Not only do they handle fetch events, but they can also be triggered by scheduler events, similar to cron jobs. I've defined handlers for both, so I can do spontaneous invocations in addition to the regular schedule.
// the worker handles both manual (fetch) and scheduled (cron) triggers
export default {
  // manual trigger: any request to the worker starts a cleanup run
  fetch(request: Request, env: Env, context: ExecutionContext) {
    setEnv(env);
    return cleanUp(context.waitUntil.bind(context));
  },
  // regular trigger: fired by the cron schedule
  scheduled(event: ScheduledController, env: Env, context: ExecutionContext) {
    setEnv(env);
    return cleanUp(context.waitUntil.bind(context));
  },
};
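The schedule itself is not part of the worker code; it is configured in the project's wrangler.toml. A sample trigger (the concrete schedule is just an example):
[triggers]
# fire the scheduled handler once a day at 04:00 UTC
crons = ["0 4 * * *"]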
The original examples delete all deployments that are older than 7 days. I decided to keep the n most recent deployments for each branch.
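My actual rule set is not shown here, but a sketch of the keep-the-newest-n idea – with the deployment shape reduced to what the rule needs, and cleanupRule as the marker the deletion step below looks at – could be:
const KEEP_PER_BRANCH = 5;

interface Deployment {
  id: string;
  created_on: string; // ISO timestamp from the Pages API
  deployment_trigger: { metadata: { branch: string } };
  cleanupRule?: string; // set when a rule decides the deployment can go
}

// group deployments by branch, keep the newest n, mark the rest
function markForCleanup(all: Deployment[]): void {
  const byBranch = new Map<string, Deployment[]>();
  for (const d of all) {
    const branch = d.deployment_trigger.metadata.branch;
    byBranch.set(branch, [...(byBranch.get(branch) ?? []), d]);
  }
  for (const deployments of byBranch.values()) {
    deployments
      .sort((a, b) => b.created_on.localeCompare(a.created_on)) // newest first
      .slice(KEEP_PER_BRANCH)
      .forEach((d) => {
        d.cleanupRule = `keep only the ${KEEP_PER_BRANCH} most recent`;
      });
  }
}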
Getting the list of deployments for a project is more annoying than I expected. It's a paged API: I wasn't able to raise per_page above 25, and – as of now – sorting also didn't seem to be supported on the deployments endpoint. So I had to loop through the individual pages.
import { accountAPI, headers } from "./config";

// Collect all entries of a paged Cloudflare list endpoint by
// fetching page after page until total_count entries are gathered.
export async function fetchList(endpoint: string) {
  let allResults: object[] = [];
  let onThisPage;
  let json: { result: object[]; result_info: { total_count: number } };
  let page = 0;
  do {
    const url = `${accountAPI}${endpoint}?page=${++page}`;
    const fetched = await fetch(url, { headers });
    json = await fetched.json();
    onThisPage = json?.result ?? [];
    allResults = [...allResults, ...onThisPage];
  } while (
    // stop on an empty page or once everything has been collected
    onThisPage.length &&
    allResults.length < json.result_info.total_count
  );
  return allResults;
}
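With that helper, collecting all deployments of a project is a single call (assuming accountAPI already contains the /accounts/{account_id} prefix, as the code above implies; the project name is an example):
const projectName = "blog";
const allDeployments = await fetchList(
  `/pages/projects/${projectName}/deployments`
);
console.log(`found ${allDeployments.length} deployments`);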
Deleting the deployments that are matched by a cleanup rule then works as expected: with the HTTP DELETE method on the deployment's URL.
project.allDeployments
  .filter((deployment) => deployment.cleanupRule)
  .forEach((deployment) => {
    const deleteURL = deploymentURL(project.name) + "/" + deployment.id;
    console.log("DELETE " + deleteURL);
    // register the request with the runtime so it is not cut short
    waitUntil(
      fetch(deleteURL, {
        method: "DELETE",
        headers,
      }).then((response) => handleErrors(response, deployment))
    );
  });
The key here is the call to waitUntil: it tells the worker runtime to wait until the asynchronous delete requests also complete. Otherwise the worker would end and all outstanding requests would be killed.
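The handleErrors helper is not shown above; a minimal version might just log failed deletions:
// report deployments whose DELETE request did not succeed
function handleErrors(response: Response, deployment: { id: string }) {
  if (!response.ok) {
    console.error(`deleting ${deployment.id} failed: ${response.status}`);
  }
}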
More Cloudflare API
Triggering builds by GitLab pushes is not bad. But sometimes it feels a bit oversized to me, being the only developer on my main branch 😉. "Integration testing" typically happens on my local machine, and for me, not every push deserves a build. As an alternative trigger, I set up webhooks for all environments. Triggering a deployment is now a simple curl on that URL.
Together with another API call that reads the build log, I can track the build and see when and how it finishes.
The core of the script is this:
deploy() {
  local WEBHOOK=$1
  local start=${2:-no}
  local id
  if [ "${start}" = "yes" ] ; then
    # trigger a fresh build via the deploy hook and give it time to start
    id=$(curl -s -X POST "https://api.cloudflare.com/client/v4/pages/webhooks/deploy_hooks/${WEBHOOK}" | jq -r ".result.id")
    sleep 45
  else
    # pick up the most recent deployment and follow its build log
    id=$(curl -s "https://api.cloudflare.com/client/v4/accounts/${account_id}/pages/projects/${project_id}/deployments?per_page=1" -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" | jq -r ".result[0].id")
    local logURL="https://api.cloudflare.com/client/v4/accounts/${account_id}/pages/projects/${project_id}/deployments/${id}/history/logs"
    local last=""
    local waiting=1
    echo "${id}"
    while [ ${waiting} -eq 1 ] ; do
      # read the newest log line and print it when it changed
      msg=$(curl -s "${logURL}" -H "Authorization: Bearer ${CLOUDFLARE_API_TOKEN}" | jq -r ".result.data[.result.total-1].line")
      if [ "${msg}" != "${last}" ] ; then
        last="${msg}"
        echo "${last}"
      fi
      # stop on the final success/failure message, otherwise poll again
      if [[ "${last}" == "Success: Your site was deployed!" ]] || [[ "${last}" == Failed:* ]] ; then
        waiting=0
      else
        sleep 15
      fi
    done
  fi
}
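Assuming account_id, project_id, and CLOUDFLARE_API_TOKEN are set in the environment and the hook id is in WEBHOOK_ID (a name I made up for this example), a release then boils down to two calls:
# kick off a fresh build via the deploy hook ...
deploy "${WEBHOOK_ID}" yes
# ... and follow the log of the resulting deployment until it finishes
deploy "${WEBHOOK_ID}"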