Netlify Functions are serverless functions that can be versioned, built, and deployed along with the rest of your site. Scheduled Functions take this a step further and allow you to run the functions at certain times using the cron format.
There are a couple of ways to define a Scheduled Function, but we're going to focus on defining it all in the function code. You can see more about this in the Netlify Scheduled Functions documentation.
To define it in the code, we'll need the @netlify/functions package, so we need to install it:
npm install @netlify/functions
We'll be using the schedule method from this package, and this method takes 2 parameters:
cron expression - a cron pattern that defines when the Scheduled Function runs. crontab guru can help you build this expression if you want a specific pattern.
callback function - the function that will be called.
So let's create a basic Scheduled Function that prints "Hello world!" to the logs:
const { schedule } = require("@netlify/functions");
exports.handler = schedule("* * * * *", await () => {
console.log("Hello world!");
return {
statusCode: 200
};
}
In the above example, we require the @netlify/functions package and use destructuring assignment to unpack schedule from it. We then call schedule with a cron value and a callback, and assign the result to exports.handler, which is what Netlify Functions will run.
In the schedule call, we use * * * * * as the cron value, which means that it will run every minute. As the callback value, we have a function that calls console.log to write "Hello world!" to the console and then returns a response object containing the key statusCode with the value 200.
We can then save this file within our Netlify site directory as netlify/functions/hello.js, and when Netlify deploys our site, we'll see "Hello world!" being printed to the logs in the Netlify UI!
So the example above isn't particularly useful, but it gives us a base for building something that can trigger a rebuild of our Netlify site. We can do this with Netlify's build hooks: a build hook is a URL that we can send a POST request to in order to tell Netlify to start a build.
I'd recommend that you store the build hook (either entirely or the identifier from the end) in your Netlify environment variables.
To make the POST request, we can use Node's built-in https module.
const { request } = require("https");
const { schedule } = require("@netlify/functions");
exports.handler = schedule("30 10 * * *", async () => {
await new Promise((resolve, reject) => {
const req = request(
`https://api.netlify.com/build_hooks/${process.env.BUILD_HOOK}`,
{ method: "POST" },
(res) => {
console.log("statusCode:", res.statusCode);
resolve();
}
);
req.on("error", (e) => {
console.error(e);
reject();
});
req.end();
});
return {
statusCode: 200,
};
});
In this example we're still using the schedule method, but the schedule is now 30 10 * * *, which runs it every day at 10:30, and the callback function uses https.request to send a POST request to our build hook.
So now we have something that rebuilds the site once a day, but we already had that with GitHub Actions. Let's make it more specific!
Netlify doesn't deploy any functions until after the build process for your site has completed, and this means that we can generate or modify our Netlify Functions at build time!
First up, we need to separate live posts and future posts so that only the posts that should be live are listed on any pages. To do this, we'll create 2 new collections in our .eleventy.js: posts and futurePosts.
const now = new Date();

eleventyConfig.addCollection("posts", (collectionApi) =>
  collectionApi
    .getFilteredByGlob("./src/posts/*")
    .filter((post) => post.date <= now)
    .reverse()
);

eleventyConfig.addCollection("futurePosts", (collectionApi) =>
  collectionApi
    .getFilteredByGlob("./src/posts/*")
    .filter((post) => post.date > now)
);
Now that we have our collections, it's important to update any pages that list posts to refer to our new posts collection so that we only show the posts that are already live.
Next we need to create the file that will generate our Scheduled Function. I've called mine buildFunction.11ty.js and it's using the 11ty.js template format.
class BuildFunction {
  data() {
    return {
      permalink: "netlify/functions/build.js",
      permalinkBypassOutputDir: true,
    };
  }

  dateToCron(date) {
    return `${date.getMinutes()} ${date.getHours()} ${date.getDate()} ${
      date.getMonth() + 1
    } ${date.getDay()}`;
  }

  render({ collections, site }) {
    const nextYear = new Date();
    nextYear.setFullYear(nextYear.getFullYear() + 1);
    nextYear.setHours(0);
    nextYear.setMinutes(0);
    nextYear.setSeconds(0);

    const postDates = collections.futurePosts
      .map((post) => {
        return post.date;
      })
      .filter((date) => date <= nextYear)
      .sort((a, b) => a - b);

    postDates.push(nextYear);

    return `
const { request } = require("https");
const { schedule } = require("@netlify/functions");

exports.handler = schedule("${this.dateToCron(postDates[0])}", async () => {
  await new Promise((resolve, reject) => {
    const req = request(
      "https://api.netlify.com/build_hooks/${process.env.BUILD_HOOK}",
      { method: "POST" },
      (res) => {
        console.log("statusCode:", res.statusCode);
        resolve();
      }
    );

    req.on("error", (e) => {
      console.error(e);
      reject();
    });

    req.end();
  });

  return {
    statusCode: 200,
  };
});`;
  }
}

module.exports = BuildFunction;
Above is the complete code for this file, but I'll go through the various parts of it.
The data method
The data method allows us to specify the frontmatter data for this file. Here we've set the permalink attribute to "netlify/functions/build.js" and the permalinkBypassOutputDir attribute to true. This means that Eleventy will build this file to netlify/functions/build.js, starting from your project root directory.
The render method
In our render method, we first map over our futurePosts collection so that we have an array of the dates that posts will go live. Then, as cron doesn't specify a year, we filter the array to only have dates within the next year. Next, we sort the array so that we have the nearest date first. Just in case we don't have any posts due to go live in the next year, we push the date for 1 year from now into the array too.
Finally, we return our function code using a template string. We insert our cron pattern using a dateToCron method, which takes a date and converts it into a cron pattern, and we insert our BUILD_HOOK environment variable.
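As a quick, purely illustrative check of what dateToCron produces:

// Illustrative only: 10:30 on the 25th of December 2022, which is a Sunday
dateToCron(new Date("2022-12-25T10:30:00"));
// => "30 10 25 12 0"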
An important thing to note is that Netlify uses UTC for times, so if you have 25th December 2022 10:30 in your post and you're expecting it to post at 10:30 in your local timezone, you'll need to convert the dates from your timezone to UTC.
I do this using the zonedTimeToUtc method from the date-fns-tz package, with a method like this:
getUTCPostDate(date) {
  const padded = (val) => val.toString().padStart(2, "0");

  return zonedTimeToUtc(
    `${date.getFullYear()}-${padded(date.getMonth() + 1)}-${padded(
      date.getDate()
    )} ${padded(date.getHours())}:${padded(date.getMinutes())}:${padded(
      date.getSeconds()
    )}`,
    "Europe/London"
  );
}
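Plugging that in is then just a case of converting each date in the render method before sorting. A minimal sketch, assuming getUTCPostDate lives on the same class:

const postDates = collections.futurePosts
  .map((post) => this.getUTCPostDate(post.date))
  .filter((date) => date <= nextYear)
  .sort((a, b) => a - b);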
This post was published with this method! I figured what better post to test it on than a post about the thing itself. Is that dogfooding?
Anyway, I hope this helps you figure out how to use Netlify Scheduled Functions to rebuild your own site!
I noticed that my Scheduled Function ran 3 times, and this was down to Netlify requiring an async function to be passed. I've updated the examples above.
When I started to look into how to create a plugin, I quickly realised that in the Eleventy world, a plugin is just an extra .eleventy.js file that gets loaded 🤯. What an amazingly simple way to create plugins!
Because it's just an extra .eleventy.js, you have access to everything you can do in an Eleventy config file. In my case, I needed .addFilter to add the webmentionsForPage and webmentionCountForPage filters, and .addGlobalData to add the webmentions to the global data.
You can install it from npm and then load it using .addPlugin, like this:
const Webmentions = require("eleventy-plugin-webmentions");
module.exports = function (eleventyConfig) {
eleventyConfig.addPlugin(Webmentions, {
domain: "codefoodpixels.com",
token: "ABC123XYZ987",
});
};
This adds a webmentions global data object, and then in your templates, you can use the webmentionsForPage and webmentionCountForPage filters to filter webmentions.
Full documentation, including a load of configurable options, is in the readme on GitHub.
Adding rounded corners to items such as images and buttons makes them feel a bit softer and more aesthetically pleasing, but I didn't feel that regular rounded corners would suit the style of my website. That's when I had the idea to use CSS clip-path.
CSS clip-path allows us to determine what part of an element should be shown. We can define a path and anything outside that path will be hidden. There are a number of different shape functions we can use:
circle
ellipse
inset - defines an inset rectangle
polygon - a set of x and y coordinates for a path
path - an SVG path string
For this article, we'll be focusing on polygon.
The polygon shape function takes a set of x and y coordinates to make up a path, with the x and y values being offsets from the top left of the element. Each of these coordinates can be defined as any valid CSS length or as a percentage.
In all of the examples on this page, I'll be using images as they help visualise the things I'm talking about, but clip-path can be applied to any HTML element.
The example below shows how we can apply clip-path to an image using pixel values to show only a specified area.
This is fine if we know the size of our images and want to apply it only to images of that particular size, otherwise we'll end up cropping out bits of larger or differently proportioned images. In the below example, the same image and clip-path are used, but one image is 300px wide and the other is 600px wide. As we can see, both show the same size section of the image, but they show different parts of the image.
If we want to allow the clip-path to flex and fit the image that we're applying it to, we can use percentages. This means that the clip path is based on a percentage of the image's dimensions, as can be seen below.
We can also combine fixed units and percentages to achieve a balance between the look that we want and flexibility.
Finally, using the CSS calc function means that we can achieve offsets from each edge while still staying flexible to different shapes and sizes of image.
So now that we have all of the knowledge we need for the CSS side, let's look at the corner itself.
Below there is an example of a pixel art curve, with outlines added for emphasis. As you can see, it's made of a number of "blocks" (called pixels) and each of these is placed in a certain way to give the idea of a curve. We need to replicate this shape with our CSS clip-path.
To replicate the shape, we need to create a set of points that follow the outside of the pixels. If we start from the leftmost point that we need to define, we have x being 0px and y being 5px. We then need to step inwards one pixel to create the top of the pixel, so for our next point we have x being 1px and y still being 5px. Our next step is to follow the pixels up, so we have x still as 1px, but y is now 3px.
After repeating this to follow the entire curve, we get the following set of coordinates:
clip-path: polygon(
  0px 5px,
  1px 5px,
  1px 3px,
  2px 3px,
  2px 2px,
  3px 2px,
  3px 1px,
  4px 1px,
  5px 1px,
  5px 0px
);
This gives us the top left corner. By using calc() and flipping the points horizontally and vertically, we can create the points for the other 3 corners.
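As an illustration of the flipping, here's a sketch that rounds just the two top corners: the top-right points are the top-left points mirrored horizontally with calc().

clip-path: polygon(
  0px 5px,
  1px 5px,
  1px 3px,
  2px 3px,
  2px 2px,
  3px 2px,
  3px 1px,
  4px 1px,
  5px 1px,
  5px 0px,
  calc(100% - 5px) 0px,
  calc(100% - 5px) 1px,
  calc(100% - 4px) 1px,
  calc(100% - 3px) 1px,
  calc(100% - 3px) 2px,
  calc(100% - 2px) 2px,
  calc(100% - 2px) 3px,
  calc(100% - 1px) 3px,
  calc(100% - 1px) 5px,
  100% 5px,
  100% 100%,
  0px 100%
);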
As you can see in the below demo, while it compares well to the border-radius example, it's not very noticeable at the current scale because we're doing it on a single pixel basis.
To make it clearer, we have to scale the pixels up. I personally use a scaling multiplier of 4, so 4 on-screen pixels is one pixel in the design. To actually implement this, we take the fixed values from the previous example and multiply them by our scaling multiplier, so the top left corner would be the following:
clip-path: polygon(
  0px 20px,
  4px 20px,
  4px 12px,
  8px 12px,
  8px 8px,
  12px 8px,
  12px 4px,
  16px 4px,
  20px 4px,
  20px 0px
);
After applying this to the rest of the corners, it looks like the below demo:
We have some nice pixelated rounded corners! I think it looks great and fits in really well on my site. This technique can be used for all sorts of shapes though, and it doesn't have to be pixelated.
But if you do want to do pixelated rounded corners, you can save yourself the effort and use the generator that I created after I'd done this on my own site. With the generator you can choose your own pixel multiplier and radius rather than using the ones I've defined here, and you'll get a live preview!
In my early teens, I found web development and I've been hooked ever since. Open source was, and still is, massively helpful for me to learn from, whether that's taking apart open source projects to learn how they work or contributing to them.
Recently, I found out that these two worlds collided.
I left school in 2008, right into the Great Recession. I wanted to get into web development as a career, but with no industry experience and no qualifications, it was difficult. After being unemployed for 9 months, I got a job as an IT Administrator. Attempt 0 at making web development a career was unsuccessful.
After a couple of years working as an IT Administrator and learning more about web development in my own time, I decided it was time to make another attempt at making a career as a web developer. I interviewed at a few companies and got 2 job offers, I accepted a junior developer role at a design agency and started my career as a web developer. Unfortunately I didn't really get any support, was given clients to handle on my own and ultimately was let go after 15 months due to not meeting their (unrealistic) expectations. Career attempt 1 was unsuccessful.
Among other things, I spent some time learning and writing some open source projects. After about 7 months, I started another development job and that was the start of career attempt 2. It was a much more supportive environment. I learned about community groups such as LeedsJS, I started writing and maintaining more open source projects, and started building things in public. That was back in 2013, and I'm still going.
In the years since, I've been building things and releasing them as open source projects, whether that's for people to use themselves, or for them to learn from, like I did. I've also contributed back to projects that I've used and found helpful.
And recently, one of those projects was used as part of a major scientific mission: The Mars 2020 Helicopter, Ingenuity.
Bootstrap 4 was used, and 5 lines of code that I contributed were part of that. It's only a small contribution, but it shows how wide reaching open source code can be. My code is on Mars!
This post is titled "Per aspera ad astra", a phrase which is part of one of my tattoos. It translates from Latin as "through adversity to the stars", and it's pretty cool that this can now relate to my web development career. I had the adversity of getting into development as a career and the rocky start of getting fired from my first development job, and now my code is part of a mission to another planet.
It's mind blowing 🤯
A Webmention is a way to let a website know that it's been mentioned by someone, somewhere on the web.
As an example: if I write a blog post and someone finds it interesting, then they can write their own blog post linking to mine and their website's software could send me a Webmention. I can then take that Webmention and display it on my website with a link to their article.
In fact, by linking to Amber's post above, I've sent her a Webmention. Cool, right?
Webmention is a W3C Recommendation and part of the IndieWeb movement. It's basically pingback reimplemented without all the XML mess: just a POST request with the source URL (the page that mentions the post) and the target URL (the post being mentioned).
From a high level, there are 3 steps you need to take to add Webmentions to your site:
Receiving Webmentions
Displaying Webmentions
Sending Webmentions
So we'll go through these stages, and I'll talk about how I integrated it into my website with Eleventy.
To be able to receive Webmentions, you need to declare an endpoint. In his Parsing Webmentions post, Jeremy Keith talks about building a minimum viable Webmention endpoint in PHP. It's definitely worth a look if you really want to build something, but otherwise you should use a service like Webmention.io, which is what I did.
To sign into Webmention.io, you'll need to set up web sign-in on your website. I did this by adding rel="me" to the Twitter and GitHub links in my navigation, and ensuring that my Twitter and GitHub profiles link to my website.
Once signed in, you can find the tags you'll need to add to your head tag that will tell other sites where to send Webmentions. If you can't find them straight away, try going to the settings page.
In my case, the tags look like this:
<link rel="webmention" href="https://webmention.io/codefoodpixels.com/webmention" />
<link rel="pingback" href="https://webmention.io/codefoodpixels.com/xmlrpc" />
Now your website can get Webmentions!
So now that we're gathering Webmentions, we need to show them somewhere. The way that you do this entirely depends on how your website is built and how you gathered your Webmentions. I'll be writing this with Webmention.io and Eleventy in mind, but some of it will be transferable.
To grab your Webmentions, Webmention.io has an API that returns data as JSON. Through the API, you can request all the mentions for your domain, or the mentions for specific pages. With Eleventy, we can grab this data and display it nicely in our pages.
Eleventy has data files, and you can use JavaScript data files to do some processing at build time. This means that we can do a call to Webmention.io and grab all of our Webmentions before we process any of the website content. An example of how we can do this is:
const fetch = require("node-fetch");
const WEBMENTION_BASE_URL = "https://webmention.io/api/mentions.jf2";
module.exports = async () => {
const domain = process.env.DOMAIN; // e.g. lukeb.co.uk
const token = process.env.WEBMENTION_IO_TOKEN; // found at the bottom of https://webmention.io/settings
const url = `${WEBMENTION_BASE_URL}?domain=${domain}&token=${token}&per-page=1000`;
try {
const res = await fetch(url);
if (res.ok) {
const feed = await res.json();
return feed.children;
}
} catch (err) {
console.error(err);
return [];
}
};
If we save the above as webmentions.js within the _data folder of our Eleventy project, then we'll have an array of the 1000 most recent Webmentions for our website available under the webmentions key in all of our templates.
We've grabbed the data, but it's just one big array for the whole domain. We need to filter this array so that we only get the mentions for the page that we're currently rendering. Within that data, we also have different types that we probably want to separate out into groups, such as likes, reposts and replies.
The different types of Webmentions supported by Webmention.io are:
in-reply-to
like-of
repost-of
bookmark-of
mention-of
rsvp
For my site, I'll be using in-reply-to, mention-of, like-of and repost-of, with in-reply-to and mention-of grouped together as comments.
The steps we'll be going through are:
Filter the Webmentions down to the ones targeting the current page
Sort them by published date
Truncate any content that's longer than 280 characters
Group the mentions into likes, reposts and comments
That would look something like this:
const { URL } = require("url");
function webmentionsForPage(webmentions, page) {
const url = new URL(page, "https://lukeb.co.uk/").toString();
const allowedTypes = {
likes: ["like-of"],
reposts: ["repost-of"],
comments: ["mention-of", "in-reply-to"],
};
const clean = (entry) => {
if (entry.content) {
if (entry.content.text.length > 280) {
entry.content.value = `${entry.content.text.substr(0, 280)}…`;
} else {
entry.content.value = entry.content.text;
}
}
return entry;
};
const pageWebmentions = webmentions
.filter((mention) => mention["wm-target"] === url)
.sort((a, b) => new Date(b.published) - new Date(a.published))
.map(clean);
const likes = cleanedWebmentions
.filter((mention) => allowedTypes.likes.includes(mention["wm-property"]))
.filter((like) => like.author)
.map((like) => like.author);
const reposts = cleanedWebmentions
.filter((mention) => allowedTypes.reposts.includes(mention["wm-property"]))
.filter((repost) => repost.author)
.map((repost) => repost.author);
const comments = cleanedWebmentions
.filter((mention) => allowedTypes.comments.includes(mention["wm-property"]))
.filter((comment) => {
const { author, published, content } = comment;
return author && author.name && published && content;
});
return {
likes,
reposts,
comments,
};
}
This is a long chunk of code, but we can set this up as a custom filter in our Eleventy config and then we can use it in our templates.
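For example, registering it in .eleventy.js might look like this (a sketch, assuming the function above is defined in, or required into, the config file):

module.exports = function (eleventyConfig) {
  // The filter name is our choice; it just has to match what the templates use
  eleventyConfig.addFilter("webmentionsForPage", webmentionsForPage);
};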
To display our Webmentions, we can render them with a partial like this:
{%- set postMentions = webmentions | webmentionsForPage(page.url) -%}

<h3>Likes</h3>
<ol>
  {% for like in postMentions.likes %}
    <li>
      <a href="{{ like.url }}" target="_blank" rel="external noopener noreferrer">
        <img
          src="{{ like.photo or '/static/images/webmention-avatar-default.svg' }}"
          alt="{{ like.name }}"
          loading="lazy"
          decoding="async"
          width="48"
          height="48"
        >
      </a>
    </li>
  {% endfor %}
</ol>

<h3>Reposts</h3>
<ol>
  {% for repost in postMentions.reposts %}
    <li>
      <a href="{{ repost.url }}" target="_blank" rel="external noopener noreferrer">
        <img
          src="{{ repost.photo or '/static/images/webmention-avatar-default.svg' }}"
          alt="{{ repost.name }}"
          loading="lazy"
          decoding="async"
          width="48"
          height="48"
        >
      </a>
    </li>
  {% endfor %}
</ol>

<h3>Comments</h3>
<ol>
  {% for comment in postMentions.comments %}
    <li>
      <img
        src="{{ comment.author.photo or '/static/images/webmention-avatar-default.svg' }}"
        alt="{{ comment.author.name }}"
        loading="lazy"
        decoding="async"
        width="48"
        height="48"
      >
      <a href="{{ comment.author.url }}" target="_blank" rel="external noopener noreferrer">
        {{ comment.author.name }}
      </a>
      <time class="dt-published" datetime="{{ comment.published }}">
        {{ comment.published | date("YYYY-MM-DD") }}
      </time>
      <p>{{ comment.content.value }}</p>
      <p>
        <a href="{{ comment.url }}" target="_blank" rel="external noopener noreferrer">
          View original post
        </a>
      </p>
    </li>
  {% endfor %}
</ol>
Now we have our webmentions rendering in the page!
Remy Sharp has built a great tool called Webmention.app that takes care of sending your outgoing Webmentions. You can pass it a page or RSS feed URL and it'll go through, grab any links and send Webmentions (or pingbacks) to any sites that support it.
The Webmention.app documentation has a few different ways to integrate it with your website. Originally I used an outgoing webhook within Netlify to get Webmention.app to send out my Webmentions, but I've now released a Netlify build plugin that doesn't rely on the Webmention.app website.
The Webmentions Netlify build plugin is a wrapper around the tool that runs Webmention.app, but by using the build plugin, you can avoid relying on a third party service. It'll all be done locally within the build!
Like a lot of people, I share my blog posts on Twitter to spread them a bit more, and I often get interactions such as likes, retweets and replies. Twitter doesn't support Webmentions, but I can use Bridgy to monitor Twitter for me and send Webmentions to my site for any interactions or links to my site.
But Bridgy doesn't just support Twitter, it supports a whole host of other social networks like Instagram, Facebook and Mastodon too!
I hope this has given you an insight into what Webmentions are, and a rough idea of how to implement them on your site. I originally thought it was going to be super difficult and complex, but there were some great posts, examples and resources that really helped.
I've now published a reworked and refined version of this as a plugin! You can read more about it in No Comment 2: The Webmentioning.
I fall into two of the current categories for getting the vaccination: my BMI is just over 40, so I'm part of the "underlying health conditions" group; and I'm an unpaid carer for my partner.
My partner is clinically extremely vulnerable and was vaccinated yesterday, having booked on Saturday. Because she's vulnerable, we were eager to get mine booked in as soon as possible too. I started checking every day if I could book through the NHS COVID vaccine booking system and on Monday night, it let me book!
I'm staying at my partner's house in Huddersfield this week in case she gets poorly from the vaccine side effects, so I booked myself in to get vaccinated at the John Smith's Stadium in Huddersfield.
As I got close to the stadium, there were loads of temporary signs making sure people knew where the vaccine center was. I headed towards the car park, and a firefighter let me know where to park and where the entrance to the vaccine center was.
I walked down to the entrance gate where someone guided me to the queue to get inside. While I was in the queue, someone took my booking reference and tapped it into a tablet.
After queueing for about 10 minutes, I was told to go into one of the booths where there were 2 people, one sat at a computer and one getting the vaccination ready. I sat down, was asked a few questions about things like allergies, my doctor's surgery and how I got to the vaccination, and then was given the injection. The syringe was more full than the flu jab, which meant that it took a few seconds to complete.
They gave me an information pack with stuff about the side effects of the vaccine and a card with the date and vaccine type that I had (Oxford/AstraZeneca), and I asked for a sticker.
It's been about an hour and a half since I had the vaccine, and so far the only side effect I've experienced is a sore arm.
Throughout the process, the staff and volunteers were fantastic: they were well organised, professional, and reassuring. The whole thing ran so smoothly, and the whole operation is a testament to our public services in spite of the complete mismanagement of the entire COVID pandemic by the Tory government. I'm so relieved to have had the first dose of my vaccine, and to have my 2nd dose booked for May.
If you're eligible, please go get vaccinated. If you're unsure if you're currently eligible, check the NHS COVID vaccine booking system.
I've been meaning to convert my own site over for a while, and recently took the plunge and decided to do it. As well as giving me the opportunity to dig into Eleventy without a deadline pressing me, it also gave me the chance to make some stylistic changes.
My previous site was built with Hexo, which was my first foray into static site generators. Hexo is pretty prescriptive but it did the job well and gave me an easily updatable website. By comparison, Eleventy is super flexible and much more generic. There isn't a concept of a blog post in Eleventy, you just have named collections of pages with various metadata elements on them such as tags and dates. This means that you can structure your site however you want.
In moving over to Eleventy, I decided to rewrite a load of the HTML and CSS instead of doing a straight copy and paste. The code ended up being super similar, but I still felt it was worth it to make sure that I was doing the right thing.
Because you can hook into various parts of the Eleventy build process (like beforeBuild and after a file has been processed), I don't have to maintain various different build processes and can just have a single coherent process to get my site up and running. I've used these hooks to do some CSS processing.
In the beforeBuild event, I use postcss to run my CSS through autoprefixer to ensure that any properties I've used are supported on older browsers.
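A minimal sketch of what that hook could look like, with hypothetical file paths:

const fs = require("fs");
const postcss = require("postcss");
const autoprefixer = require("autoprefixer");

eleventyConfig.on("beforeBuild", async () => {
  // Hypothetical paths: prefix the site's stylesheet before the build uses it
  const css = fs.readFileSync("src/css/styles.css", "utf8");
  const result = await postcss([autoprefixer]).process(css, {
    from: "src/css/styles.css",
  });

  fs.writeFileSync("src/css/styles.prefixed.css", result.css);
});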
I've also added a transform for HTML pages that grabs the CSS files for the whole site, runs them through purgecss to remove any unused code for the current page, and then runs the result through csso to optimise it. The resulting CSS is then inlined into the page.
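And a rough sketch of the transform side, assuming the prefixed CSS from above is read into siteCss (names and paths are hypothetical, not my exact code):

const fs = require("fs");
const { PurgeCSS } = require("purgecss");
const csso = require("csso");

const siteCss = fs.readFileSync("src/css/styles.prefixed.css", "utf8");

eleventyConfig.addTransform("inline-css", async (content, outputPath) => {
  if (!outputPath || !outputPath.endsWith(".html")) {
    return content;
  }

  // Remove any rules this page doesn't use, then optimise what's left
  const [purged] = await new PurgeCSS().purge({
    content: [{ raw: content, extension: "html" }],
    css: [{ raw: siteCss }],
  });
  const { css } = csso.minify(purged.css);

  // Inline the result into the page's head
  return content.replace("</head>", `<style>${css}</style></head>`);
});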
Eleventy also has a plugin called Eleventy Image that I've used to optimise the images. I've configured it to generate AVIF, WebP, JPEG and PNG versions of the images at widths of 1000px, 800px, 600px and 250px. It then generates a picture tag with these in, and the most optimal image is chosen by the browser depending on what it supports and what the viewport width is.
I also converted a number of my images to SVGs, such as my spaceship logo. I did this by exporting the 1:1 pixel versions of the images as PNGs from the pixel art editor I use (Aseprite), then converting them to SVG using an excellent CodePen I found called Pixels.svg. I then ran the SVGs through SVGOMG to optimise them. While Aseprite has the option to export as SVG, it exports each pixel as a rect, which means that the SVG has a massive file size.
I got to use one of my favourite features of Eleventy when building this site: data files. For my speaking page, I wanted a list of all my talks and where I gave them. The most sensible way (to me at least) to do this was to have an array of talk objects. Within each talk object I have some information about the talk and an array of events I've given the talk at. This can then be used to build out the speaking page in a nice way.
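As a rough illustration (not my exact structure), one of those talk objects might look like:

module.exports = [
  {
    title: "A talk about something",
    description: "What the talk covers...",
    events: [
      { name: "LeedsJS", location: "Leeds", date: "2018-05-30" },
      { name: "A conference", location: "Newcastle", date: "2018-12-06" },
    ],
  },
];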
While rebuilding the site, I decided to make a few design changes. I still love the pixel art space theme and colour scheme, but felt I could make some of the bits more pixel-art like and make some things stand out more.
My old website just had the blog listing on the homepage and the same header as the rest of the website. I wanted to make the site a little bit more about me, so I moved the content of the about page to the homepage, and moved the blog to live under the /blog/ path.
I then decided to differentiate the header on the homepage a bit and changed it to take up almost the whole screen. I then added the asteroids from the hidden game I have on the site, animated the spaceship to move around a little more, and added some laser shots.
I'm really happy with how the homepage has turned out and think it gives my site a bit more personality!
The navigation on the old site stood out alright, but it never quite looked how I wanted. On the new site, I've made it a complete red bar, with the red buttons surrounded by a black border.
I've changed the 3D effect on the buttons too. Previously they were using border-style: outset, but I used CSS gradients to make the borders look a bit more like a pixel art button. I also took the opportunity to add a rollover effect to the buttons, to make them look like they're pressed. It sort of reminds me of the rollover effects you'd get on websites in the 90s!
I've added an author bar to the bottom of my blog posts with a photo of me, my bio and some links to follow me on Twitter and subscribe to my RSS feed. I feel that this makes the page a little more complete and less like it's abruptly ending.
Introducing Do I need bunting today?!
Do I need bunting today? is a site that tells you whether bunting is appropriate, based on if it's a bank holiday in England, Wales, Scotland or Northern Ireland.
That's it, that's the site.
Because I could 🤷♂️
The GOV.UK website has a list of bank holidays available at https://www.gov.uk/bank-holidays. This list is well presented, easy to read and I frequently use it to see when bank holidays are because I forget.
But a while ago, I learned that if you add .json to the end of that URL, you get a JSON representation of the bank holidays, which includes a true/false value for whether bunting is appropriate.
So I decided to make a site using Eleventy with JavaScript data files. The site grabs the data from https://www.gov.uk/bank-holidays.json, manipulates it into a format that I can use to show information to the user and then it displays it in a massive, easy to read format for the user.
It's hosted on Netlify and uses a GitHub Action to rebuild the site every day using a Netlify Build Hook.
The process for me was super easy! Because my HTML was already well structured and I had decent default sizes for my images, I just had to remove the links to the CSS files and it worked!
While I was working at Sky Betting and Gaming, I was introduced to a pattern that they use called a viewbuilder. I find it to be a really interesting and useful idea, and we used it heavily in my time there.
At its most simple, a viewbuilder is a process that runs a task on a set interval to gather, manipulate and store data for later use.
The interval you set depends on how often the data will update and how fresh you want it to be. Most of our instances ran at a 1 second interval, but some ran at other intervals such as 5 and 10 seconds.
While you could gather and return this data as and when it is requested by the frontend, this can be slow and can introduce scaling and caching issues.
By using a viewbuilder we only gather this data once within a set interval, reducing the load on any APIs used and reducing the work done during the call from the frontend.
This may sound just like caching, but there are 2 key differences. The first is that caching is usually done on the first request; on heavy-traffic sites, this can result in a cache stampede, potentially bringing your site down. The second difference is that this is a transformed cache, so there is no extra processing necessary when making the request.
The downside of the viewbuilder approach is that the data is only as fresh as the frequency of the viewbuilder. This may be a problem for some use cases, but it was acceptable for us.
When we first start the viewbuilder process, we initialise things like logging and any necessary connections to data stores. We then set the interval to run the viewbuilder task.
The task is usually broken down into 3 stages:
Read
Transform
Write
At the read stage, we'd gather all the data we need for that particular process. This may be calls to internal APIs, calls to 3rd party APIs or even direct database calls.
We would frequently gather data from multiple sources at this stage, and would occasionally have cases where the data from one source would be used to gather data from another.
During the transform stage, the data from the read stage would be combined and manipulated to produce the desired output.
At the write stage, we would write the document into MongoDB for later use. We would also add a timestamp so that we can see how fresh the data was when debugging.
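Pulling the three stages together, a minimal sketch of a viewbuilder in Node might look like this (the data sources, database names and interval are all illustrative):

const { MongoClient } = require("mongodb");

// Hypothetical data sources - stand-ins for internal APIs or database calls
async function fetchEvents() { /* ... */ return []; }
async function fetchPrices() { /* ... */ return {}; }

async function main() {
  // Initialise long-lived connections once at startup
  const client = await MongoClient.connect(process.env.MONGO_URL);
  const views = client.db("viewbuilder").collection("views");

  setInterval(async () => {
    try {
      // Read: gather the raw data from each source
      const [events, prices] = await Promise.all([fetchEvents(), fetchPrices()]);

      // Transform: combine it into the shape the frontend needs
      const view = events.map((event) => ({ ...event, price: prices[event.id] }));

      // Write: store the document, with a timestamp for debugging freshness
      await views.updateOne(
        { _id: "homepage" },
        { $set: { view, updatedAt: new Date() } },
        { upsert: true }
      );
    } catch (err) {
      console.error(err);
    }
  }, 1000);
}

main();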
So it may come as no real surprise that when I heard there was going to be a Christmas jumper competition at work last year, I decided to add LEDs to my jumper and connect it to the Internet.
I didn't want to start with an off-the-shelf Christmas jumper, as that feels a little like cheating. Instead, I decided to get a plain jumper and add my own stuff to it. I went with a dark red jumper with a chunky knit from Primark that cost £8.
I then started thinking about what I was going to do for the design. The most obvious idea to me was a Christmas tree, as I could scatter LEDs over it and light it up like an actual Christmas tree.
As I'm no good at sewing and don't possess a sewing machine, I asked my sister Cerise for her help at this stage. I cut a Christmas tree shape out of felt, and she sewed it onto the jumper for me.
Once that was done, I got some sparkly pipe cleaners and used a hot glue gun to attach them to the felt in a tinsel-like design. At this stage I also marked and poked holes in the felt for where the lights would come out.
For the lights, I started with a set of battery powered Christmas lights that I bought for £1. I chose these because being battery powered meant I could wire them up to a microcontroller and control the power to them to make them flash or fade.
One of the boards that I love to use when I build things is the ESP8266 as it's cheap, has wifi and, by flashing Espruino on it, can be programmed with JavaScript.
The Christmas lights were originally powered by 2 AA batteries in series, which means that 3 volts of power were being delivered to the lights. The ESP8266 has a 3.3v pin, which I used to power the lights instead.
To be able to control the lights, I used a transistor connected between the lights and the ground pin, with the control pin connected to one of the GPIO pins on the ESP8266. This meant that I could control the resistance that the transistor was providing, and could use it to dim or flash the lights. You can see a wiring diagram below.
The last step of the hardware side was putting the lights in the jumper. As I'd already created holes in the felt for the lights, I just needed to get the lights through the jumper itself.
Part of the reason that I chose the chunky knit jumper was so that I can push the lights through the gaps in the knit and then push it through the holes in the felt.
One issue that I ran into was the lights shifting and occasionally falling behind the felt. This was pretty annoying, but the fix was pretty easy: I added a dab of hot glue to each light to hold it onto the felt.
An Internet connected Christmas jumper isn't complete without being connected to and controlled from the Internet.
As mentioned earlier, I'm using the ESP8266. It has wifi, which means I can do some communication. When I attended JSHeroes in 2018, Stephanie Nemeth talked about connecting her LED projects to the Internet. One of the key takeaways for me was that sockets aren't necessarily the greatest way to connect, and that MQTT may be better. Stephanie's talk is up on YouTube.
With this in mind, I set about building the software with MQTT as the communication method. As part of Espruino there's an MQTT module, which makes the hardware side easy. On the web side I chose to use MQTT.js, and I chose shiftr.io as my MQTT broker.
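As a rough sketch of the Espruino side (the broker credentials, topic and pin are placeholders, and the real jumper supports modes like fade and flash rather than just a brightness value):

// Assumes the ESP8266 has already joined wifi on boot
var mqtt = require("MQTT").create("broker.shiftr.io", {
  username: "user",
  password: "pass",
});

var LIGHTS_PIN = D2; // hypothetical GPIO pin wired to the transistor

mqtt.on("connected", function () {
  mqtt.subscribe("jumper/lights");
});

mqtt.on("publish", function (pub) {
  var msg = JSON.parse(pub.message);

  // Drive the transistor with PWM: 0 is off, 1 is fully on
  analogWrite(LIGHTS_PIN, msg.brightness, { freq: 200 });
});

mqtt.connect();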
I'm not going to go super in-depth on the web app, but I used plain JavaScript, HTML and CSS to build a form for the various ways to control the lights. With the fade and flash options, the user has the choice of controlling how long the lights stay on and off for. As I use MQTT, I don't need any server-side code for this app which means that I can deploy it as a static site on Netlify.
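On the web side, publishing a control message with MQTT.js might look something like this (again, the broker and topic are placeholders):

const mqtt = require("mqtt");

const client = mqtt.connect("wss://broker.shiftr.io", {
  username: "user",
  password: "pass",
});

client.on("connect", () => {
  // Hypothetical payload shape, matching what the jumper sketch above expects
  client.publish("jumper/lights", JSON.stringify({ brightness: 0.5 }));
});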
I'm super happy with the final product. It's great fun to have people play with it, even from across the world!
You can find all of the code in the jumper-lights repo on my github, and below is a video of my friend Beth controlling it!
Many community groups try to keep their costs as low as possible, and those that get sponsorship usually get it on a month-to-month basis and attribute it directly to something. This makes paying for a 6 month subscription to a service difficult, and organisers end up paying for it themselves.
I also felt that the service that Meetup was providing for the cost wasn't great. Over the past year or two, they've updated the site and have taken away a lot of flexibility.
There were 3 key features that we used on Meetup that we needed to replace, as well as a couple of extra features that I wanted.
Event page: We needed a page for each event detailing the talks, the venue, the date and time, and the sponsors for that event.
RSVPs: We need to have an idea of how many people will be attending, and to be able to set a maximum number of attendees.
Mailing list/email notifications: We want to be able to contact our members with long form event notifications, as well as any other messages that we want to share with them such as conference discounts.
Speaker pages: I wanted to be able to have profile pages for speakers so that people can see any previous talks that they've given and find any links that the speakers wanted to share.
Talk pages: I wanted to have pages for the talks so that after the talk we can share the video on there with all the other details of the talk.
Instead of implementing them myself, I decided to use external services for RSVPs and emails. These are usually complex systems with a lot of moving parts, and I felt it was much better to rely on tried and tested systems.
For RSVPs I decided to use Tito. Tito is free for free events and has a pretty streamlined experience when getting a ticket. It also has a very good admin experience.
For email I chose Mailchimp. Mailchimp has a pretty good free tier, and their limits surpass anything that we need.
In the past couple of years, I've rediscovered my love for static sites, and static site generators have played a massive part of that. I use Hexo to build my own website, but I'd heard great things about another static site generator called Eleventy and decided to give that a go.
One of the great features of Eleventy is the ability to have global data files. This allows you to define your data in JavaScript or JSON files, and have it available to use in your pages. I've used this heavily with the LeedsJS website, as it means we can split the data and use it in various forms.
I've broken the data down into 4 sections, which are linked in various ways.
First are the speakers: each file contains information about a speaker. This is the file for me on the new site:
{
  "id": "luke-bonaccorsi",
  "name": "Luke Bonaccorsi",
  "bio": "...",
  "picture": "luke-bonaccorsi.jpg",
  "twitter": "CodeFoodPixels",
  "links": {
    "Website": "https://codefoodpixels.com",
    "GitHub": "https://github.com/codefoodpixels"
  }
}
Next are the talks: each file contains information about a talk, and links to a speaker using the speaker's ID.
{
  "id": "coding-is-serious-business",
  "title": "Coding Is Serious Business",
  "speaker": "luke-bonaccorsi",
  "abstract": "...",
  "date": "2019-02-27",
  "youtube_video_id": "CWiiKljO7D0"
}
Then there are the events: each file contains the information for an event, including the talks, sponsors and the dates of various stages.
The site uses the announce_date property to decide whether to show the event on the site, and the ticket_date property to decide if it should show the ticket button.
{
  "id": "2019-02-27",
  "title": "February - Luke Bonaccorsi & Wade Penistone",
  "blurb": "...",
  "talks": ["coding-is-serious-business", "mindstack"],
  "sponsors": [
    "sky-betting-and-gaming",
    "bruntwood",
    "starlight-software",
    "jetbrains",
    "frontendne"
  ],
  "date": "2019-02-27",
  "start_time": "18:30",
  "end_time": "20:30",
  "ticket_date": "2019-02-20",
  "announce_date": "2019-02-01"
}
Finally, the sponsors: each file contains the information about a sponsor so we can display it wherever we need to.
{
  "id": "sky-betting-and-gaming",
  "name": "Sky Betting & Gaming",
  "url": "https://www.skybetcareers.com/",
  "logo": "sky-betting-and-gaming.png",
  "twitter": "SkyBetCareers"
}
After deciding how to split the data, I had to build the pages and decide how to structure the site. This fell into a similar pattern as the data.
For the homepage, I wanted the next event to be the main focus. The main purpose of the group is to hold these events, and I feel it's the primary reason that people visit our website.
I didn't want to overload the homepage with all of the information from the event, so I decided to limit it to the title, date, time, event blurb, talk titles and speakers, and then the buttons for more details and tickets. I feel that this gives a pretty good overview of the event and people can click through to get more details if they want.
I also added some information about the group itself and the venue for our events.
Each event gets a page on our new site. This page contains all the details about that event, including the talks, sponsors and ticket information, as well as the date, time and blurb for the event.
When it's the current event, this page is linked to from the homepage and any communications such as emails and tweets. When the event is over, the page still has all the details of the event and also embeds the videos for the talks.
We also have a listing of all of the events (starting from our first event of 2019) with the most recent at the top.
Every speaker has their own profile page on the site, which includes their biography, an image, any links that they want and links to all the talks that they've given.
All the speakers are also listed on a directory page in alphabetical order.
Each talk has a page that includes the title, the date the talk was given, a link to the speaker's profile page and the abstract for the talk. After the event, the YouTube video for the talk will be embedded too.
There's also a listing page with all the talks in alphabetical order.
This was an addition that I made after we'd launched the website and successfully used it for an event. The website generates a feedback form for attendees to submit feedback about the event and the talks.
I wanted the site to be quick, so I took a few steps to help this.
The easiest step was ensuring that we don't serve any huge images. As part of the build we have a script that runs to resize them all down to a maximum of 300 pixels in either width or height. This is the largest that an image will be displayed on the website.
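With a package like sharp, a resize step along those lines could look like this (a sketch with hypothetical paths; this isn't necessarily the tool we used):

const sharp = require("sharp");

// Resize so neither dimension exceeds 300px, without enlarging smaller images
sharp("images/src/speaker.jpg")
  .resize({
    width: 300,
    height: 300,
    fit: "inside",
    withoutEnlargement: true,
  })
  .toFile("images/build/speaker.jpg")
  .then(() => console.log("Resized"));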
Another step that I took was to leverage caching. While this doesn't have an impact on the initial load, it does on subsequent loads. To do this, I use a service worker to store assets in a cache, and the browser will look in the cache before then trying the network.
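The cache-first lookup boils down to something like this minimal service worker sketch (a real worker would also populate and version the cache):

self.addEventListener("fetch", (event) => {
  event.respondWith(
    // Serve from the cache first, falling back to the network
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});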
After reading the "How we built the fastest conference website in the world" post from the JSConf EU blog, I decided to follow the same process of building a stylesheet of styles for a particular page and then inlining it into the page. This improved our rendering time significantly.
Finally, I tried to use as little JavaScript on the website itself as possible. While I love JavaScript and we're a JavaScript group, it'd be irresponsible to add a load of page weight with unnecessary JavaScript.
I'm lazy. I don't want to have to do a load of little tasks every time we announce a new event, or when tickets become available, or when the event is finished...
So I set out with the goal of being able to automate as much of the site as possible. Initially I thought this was going to be something I ended up doing after we'd used the site for a bit, but the way that I'd structured the data really helped here and meant that I was able to do it fairly easily.
Because some of the stuff such as event announcements and ticket releases are driven by time, I want to only show those things when they should be available.
In the data, I store dates for the event announcements and ticket releases which I then check against when generating the site. If it is currently on or past the date, then it shows the content.
But this still means that I have to build the site every day, and I don't want to do that manually. Thankfully I can combine Netlify build hooks with a scheduled serverless function to rebuild the site daily.
Because all the data is linked together, I can generate a JSON file with a load of data about the next event such as the title, talk titles, speaker information and dates like the announcement date and ticket release date. This file then sits on the site so that it can be used for other tasks.
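The generated file might look something like this (an illustrative shape based on the event data above, not the exact fields):

{
  "id": "2019-02-27",
  "title": "February - Luke Bonaccorsi & Wade Penistone",
  "date": "2019-02-27",
  "announce_date": "2019-02-01",
  "ticket_date": "2019-02-20",
  "talks": [
    {
      "id": "coding-is-serious-business",
      "title": "Coding Is Serious Business",
      "speaker": "Luke Bonaccorsi"
    }
  ]
}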
As I mentioned earlier, we're using Mailchimp for our emails. One of the reasons that I didn't mention for choosing them is that they have an API that we can use to create and send email campaigns. Additionally, you can provide your content as HTML through the API to go into a template.
As part of the site build process, I generate the HTML for the various emails that we send. I then have a scheduled serverless function that runs 15 minutes after the rebuild of the website, which grabs the next-event.json, checks if it should be sending any emails and, if so, grabs the relevant HTML, builds the campaign and sends it.
Besides email, our other main way to communicate about the event is via Twitter and we do this at about the same cadence as our emails.
I have a scheduled serverless function that I use to post through Twitter's API. The script grabs the next-event.json and determines if a tweet should be posted that day. If so, it determines the content for the tweet and posts it through the API.
We're using Tito for our tickets, and Tito also has an API that we can use as part of the automation.
In a scheduled serverless function, we grab the next-event.json and check if it's announcement day. If so, then we create the tickets through the Tito API and set the tickets to be available on the ticket release day.
When possible, we stream our talks on YouTube. This means that people who can't make the event can watch the talks as they happen, as well as getting a recording for later. As part of this, we want to put some information about the speaker and the talk on the stream, as well as the video feeds.
As we have all the data we need in the data files, we can generate an HTML page with fixed dimensions and put the information about the speaker and talk into the right areas.
When it comes time to stream, we can drop this page into OBS with the browser plugin and then have this as part of the stream.
Another thing that gets generated as part of the website is the slides for the introductions at the start of the event. While some of the stuff on the slides is static (such as the code of conduct, social media links and mailing list info), other stuff is based on data from the event. When generating the site, we pull this information from the event, talk and speaker data and render it into a HTML page that uses CSS scroll-snap to create slides.
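A minimal sketch of the scroll-snap part (class names are hypothetical):

.slides {
  height: 100vh;
  overflow-y: scroll;
  scroll-snap-type: y mandatory;
}

.slide {
  height: 100vh;
  scroll-snap-align: start;
}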
The biggest drive for LeedsJS is sharing knowledge so that our attendees can become better developers. In recent years I've expanded the scope of this by adding another talk spot where we try and get talks on topics such as mental health, testing and inclusivity. These are topics that make people better teammates, and this fits well with the goal of helping people become better developers.
As a group, we welcome people with all levels of experience, from those that have been writing JavaScript since it first came around, to those who have no real experience writing it at all. This mix means that some of the junior developers who attend have had a real boost from being part of the community, through speaking to more experienced developers and asking questions.
We also provide a platform for people who want to share their knowledge and try public speaking. The majority of our speakers are local, and over the past couple of years we've had a number of first-time speakers who have gone on to speak all over the world (myself included).
Another thing that I've been pushing is ensuring that we make the group inclusive. I've made some changes over the past few years with the aim of reducing the barriers for people to attend our events, such as removing Q&A, moving our events to an alcohol-free venue and accounting for dietary restrictions when picking the food.
Independent community groups are hugely important to the tech scene in any city, as they bring people together from all backgrounds with the sole aim of making everyone better. In Leeds we're very lucky to have a number of really fantastic grassroots community groups run by organisers who volunteer their time, and I'm proud to be part of that group.
At the time, we were based in a pub, but the diversity in our audience was little to none. After doing a little research, I realised that the venue was a big contributing factor to this and we were unintentionally excluding people by holding it there.
The first thing I came across was that Muslims are forbidden by their religion from entering a pub, even if it's not to drink. Straight away this pushes away a chunk of our potential audience.
Then, I was discussing it with Chris Manson and we realised that other people might feel uncomfortable or unsafe around alcohol and people drinking too. This includes women, alcoholics and minority groups.
Our event wasn't focused on drinking, it was about meeting people, learning and sharing knowledge. We didn't need the alcohol.
We found a great community venue and we made the pub a post-talk thing instead. Eventually we also realised that using some of the event budget for something that not everyone can be a part of is unfair and we made it an unofficial part of the event, using the budget to get vegan, vegetarian and gluten free options for our food.
Since making these changes, I've definitely noticed a much more mixed audience and I'm really pleased with this. I know we still have areas to work on, but this was a great move for us.
If you want to see last year's post, you can find it here.
At the end of last year's post I set a few aims for this year. They were:
I'll go into it a bit more below, but I feel like I've hit these aims.
2018 was a good year for speaking for me.
I've spoken at a number of meetups, having given my home automation talk at Tech Nottingham and the Fusion Meetup in Birmingham in May and June; and then debuting my Web Bluetooth talk at LeedsJS in May too. To end my year of meetups, I spoke at the excellent FrontendNE, where I did a double feature of both talks.
In July I gave my home automation talk at the Fullstack London and ScotlandJS conferences, both of which I had a brilliant time at and met some fantastic people.
Before this year I'd never been to South America, but towards the end of the year I went twice to speak at two separate conferences. First, in October, I headed to Buenos Aires in Argentina to speak at Nodeconf Argentina and then in November I went to Medellín in Colombia to speak at JSConf Colombia. Both were amazing experiences in super interesting cities with more great people! I was also fortunate to be able to extend my time in both cities to explore and eat some amazing food!
In August I wrote about taking steps to deal with my mental health better (you can read that post here), and I've made some progress.
I went through the "depression recovery course" and while it didn't help, it was a step in the right direction.
A few weeks ago I started weekly counselling sessions. While I'm still unsure about it, I've spoken about some of the issues that I struggle with and talking about it has been a relief in itself.
This year we've released some great improvements and features to the Sky Bet site, and I'm proud of the work that we've done.
I'm still learning stuff all the time and I'm still really enjoying the chances I get to mentor.
This year we've had a load of amazing talks on both technical and non-technical topics at LeedsJS.
This year I decided to stop Q&A, and I feel that this has been a success. You can read more about this decision here.
I also wrote about our giveaway process here. Since this post, I've also started selecting a second winner from any tweets made with the LeedsJS hashtag during the event.
Finally, I made the decision at the end of this year that I'm going to move LeedsJS away from Meetup.com during the next 6 months. They charge $90 for 6 months for what I feel to be an inflexible and sub-par platform. I'm going to be moving to using the LeedsJS website, a ticketing system like Tito and a mailing list system like Mailchimp.
At the end of last year and the beginning of this year, I built an ESP8266 based, JavaScript powered thermostat to control my heating. Until I moved house in July, this controlled my heating. The real test of this is when I went on a trip to Scotland in February and set it to the minimum temperature. It successfully controlled my heating and my pipes didn't freeze!
My next project was the LED display for my web bluetooth talk. I had real fun working on this and learning about bluetooth.
My final project of the year was automating my Christmas jumper. I connected it to the Internet and allowed people to control it from around the world. I even did a live stream of it on YouTube. I hope to have a post written about it soon!
Next year I'd like to do the following:
I'd been at the job for 15 months when I was fired and had spent a chunk of that time going through the disciplinary process for working too slow and not working to an expected quality.
Looking back, the situation was pretty ridiculous. I'd been hired as a junior developer (having never worked in the industry and having been completely self-taught), I was expected to work on projects completely on my own (including testing my own work) and I was turned down whenever I asked for help. I'd stayed late evenings and weekends to work on projects, trying to get them finished and to the level of quality that was expected, but it didn't help.
Throughout the disciplinary process, I was told that I needed to work faster and improve the quality of my work, but was never given the help I needed to do this. This eventually led to me being fired.
Immediately after being told I was fired, I collected my things. I couldn't face my now ex-coworkers and rushed my way through the office, out to my car and left.
I drove for about 15 minutes before deciding to stop off at a supermarket. The second I parked up and switched the engine off, I broke down and started crying. I'd fucked up, I was a failure.
When I finally got home, I told my housemate, friends and parents what had happened. I felt super low and afterwards, I just sat on the sofa, watched TV and ate junk food.
My friends helped me that evening: we hung out and got drunk. The support was hugely appreciated.
In the following months, depression hit me hard. I was burning through my savings to support myself and I was applying for any job I felt that I could do, but I was getting rejected from everything.
I had nothing to do during the day besides look for jobs. My life was empty. This meant that I started going to bed later and waking up later. Eventually, I got to the stage that I was going to bed at 6am and waking up at 1pm.
Impostor syndrome added itself to the mix. I'd been discovered as a fraud and had been fired. The question of whether I should keep trying to be a professional developer weighed heavy on my mind.
The interviews that I did get were tough. It's really difficult to convince someone to hire you when you're plagued by impostor syndrome. It's also really hard to be honest about being fired, so I tried to avoid it and talk around it. Looking back, this probably didn't work in my favour and was likely a mark against me.
I was really fortunate that my parents offered to take my sister, her fiancé and me on holiday to Cyprus for Christmas that year. It was great to get a bit of a break and to focus on relaxing a little.
It was also an opportunity for me to reflect. I seriously considered my future and what I wanted to do. I came to the decision that I truly enjoyed development and wanted to keep trying to make a career out of it. I still felt like a fraud, but I didn't know what else I was any good at.
In the new year, I took some steps to get my life back on track. After spending months living only off my savings, I'd run out of money. I signed up for unemployment benefit and housing benefit.
I pushed myself to apply for development jobs and got a few interviews, but I was still struggling to be open and honest about being fired. When I did start being honest about it, I was surprised by the reaction. Nobody felt that firing me was the right way to go about things, which made me feel more comfortable about speaking about it.
After a few more months of job searching and interviewing, I finally managed to get a job offer. Throughout the process, the hiring manager made me feel super comfortable, which meant I could be completely honest.
I started my new job on the 2nd of April 2013, just under 7 months from when I'd been fired. The impostor syndrome was still strong, but I had a second attempt at my career.
I'd managed to join a company with a very supportive development team. From the first day, I was learning and growing with the help of my colleagues. I stayed at that company for over 3 years and it really helped me develop my career.
Looking back, I'd have never thought I'd be where I am now. I'm working for a great company with a fantastic team, I'm speaking at conferences all over the world and I'm helping push the JavaScript community in Leeds forward by running LeedsJS. I still struggle with impostor syndrome every now and then, but I'm leaps and bounds beyond where I thought I'd be.
Overall I think it was a pretty ridiculous situation. I nearly ended up taking a completely different career path, although I have no idea what that might have been. I struggled with some deep depression and impostor syndrome. I completely destroyed my sleeping patterns. It was a life-changing situation, but in the end, it all worked out for me. I'm incredibly lucky.
I hope you never have to go through this experience, but if you do then please know that you're not the first. Many of us have experienced this and made it through, you can too.
Although it really fucking sucks.
While this post is difficult to write, I want to be open about it. I hope that this can help other people feel like they can talk about their mental health too.
Since I was in my early teenage years, I've struggled with low moods and low self-esteem. My way of dealing with it has been either to accept it and put up with feeling like that, or to ignore it and carry on.
But a couple of months ago, I found out some things that made me realise that I'd not been coping with my mental health properly and that my way to deal with it wasn't great.
I decided that I needed to speak with a doctor and get some help with it. One effect of my struggle was that my low self-esteem made me not want to put my problems on other people. Looking back now, this is ridiculous, but it meant that I hadn't been registered with a doctor since I moved out of my parents' house. My first step was to get this sorted.
I found that you can do a large portion of the registration process online, which hugely reduced the barrier for me. After doing the online part, I had to go in and organise an appointment with the nurse to finalise everything. The whole process took a few weeks in total.
After registering, I could book an appointment. Again, this was something I could do online which made me feel way more comfortable doing it.
A few weeks later, I had my appointment with the doctor. She was very friendly and made me feel comfortable discussing the way that I felt and the issues I was having. In the weeks leading up to the appointment, I'd started creating a list on my phone of things that I wanted to mention and this really helped me too.
The doctor said that there was nothing that she could do directly, but told me to use the Leeds IAPT self-referral system. This is an online process where you fill out some forms and questionnaires about the issues that you're facing. It took me about half an hour, but I filled it out on my phone during the bus ride to work after the appointment.
The questionnaires ask questions such as how often you feel low and how often you feel little interest in doing things. This can be difficult to judge yourself on, especially if you've been living with a mental health issue for a while and it has become a somewhat normal part of your life. They also ask you to consider only the previous 2 weeks, which can greatly skew the outcome if you've had a good couple of weeks.
Due to the number of people who refer themselves, the IAPT says that it may be up to 5 weeks before they contact you. I was contacted by email about 3 weeks later, but was told that, based on my results, they might not be the most suitable service. As I'd had a couple of good weeks before filling out the questionnaire, they asked me to fill it out again as if I'd had a couple of bad weeks.
Shortly after I'd emailed back my updated questionnaire, I got a phone call from them. I was told that based on my results I had moderately severe depression and they gave me a few options.
First, they told me about the online resources they offer so that people can try and help themselves. Next, I was told about a couple of short group-based courses that try and give people an understanding of how to cope and deal with depression. Finally, I was told about one-to-one therapy.
As we discussed it, I didn't feel like the online resources would help me so we discounted that. I was then told that the one-to-one therapy has a 9-month waiting list and that by going on one of the courses, I join the waiting list anyway.
One of the key differences between the courses for me was how involved I'd have to be as a participant. I'm still very uncomfortable speaking about my mental health in person, especially to a group of strangers. This made me opt for the "depression recovery course", a low-involvement course that's split into weekly 90-minute sessions over a period of 6 weeks. Because of my prior commitments to speaking at the Fullstack and ScotlandJS conferences, I couldn't start the next available course, so I was signed up to the following one. I start on the 28th of August.
I'm still unsure how much it'll actually help me, but I'm trying to be open-minded and I'll hopefully learn as much as I can. If it ends up not helping, then at least I tried and I can see what the next step is. It's a start.
Finally, I'm really thankful to everyone who's been supportive and helpful so far. Being comfortable and able to talk about a difficult topic like mental health with someone helps a lot and I'm incredibly lucky that I have friends, co-workers and an excellent manager that I've been able to talk about it with.
Thanks to Beth North for reading through this before I posted it.
Before April's event, we'd always had Q&A after the talks. Over the 3 years that I've been involved in LeedsJS, I've experienced a few recurring issues with the Q&A part of the talks.
This is one of my least favourite things I've had happen. Someone asks a question, but phrases it in a way intended to show that they know more about the topic than the speaker.
This really sucks because the speaker has just given up a huge chunk of their time to share their knowledge and teach others. The asker has not and is basically stroking their own ego.
This is another one I really hate. The asker in this instance is basically asking the speaker to do work for them for free, on top of the work they had to do to prepare and deliver the talk.
What makes this worse for me is that it's likely that the asker came to the event with this question in mind, planning to ask the speaker to do this free work.
It's also disrespectful to the audience: they don't care about your specific code issues and have no context for them anyway.
I feel this is another example of the asker stroking their own ego. Rather than ask a question, the "asker" makes a statement, usually trying to correct something the speaker said or giving their opinion. To me this is hugely disrespectful, especially considering the time and effort that the speaker has put in.
This usually happens when someone focuses on a small aspect of the talk that wasn't really relevant to the central point. It may be of interest to the asker but isn't related to what the speaker wanted people to take away from the talk or even something that the speaker has an opinion about.
While most people like a joke, unfortunately people aren't very original when it comes to thinking one up. Often when there's a joke question about a talk, the speaker has heard it multiple times already. To them, the humour has likely worn off and become tedious, especially if it's something related to them and not the talk.
While there can still be good questions in a Q&A, it's still usually something that only a handful of your audience actually wants to know more about. The rest of your audience is becoming disengaged and just wants your event to progress. The question that's largely irrelevant to the talk and the "I have this specific problem with my code, fix it for me" also fall into this category.
On top of the things I've seen happen before at LeedsJS, there are some things that I also see as areas where Q&A falls down or can be an issue. This stuff is either from my own experience as an organiser/speaker/attendee or what I've learned from others through discussions.
For some folks, being put on the spot to answer something that they've not been able to prepare for is anxiety-inducing and something that they want to avoid.
You may argue "you're expected to know this stuff" but I disagree, talks aren't only for experts. They're for sharing experiences and ideas, for inspiring folks to try something and for encouraging people to learn too. It's fine to not know the answer to a question, but being asked to admit this on stage in front of an audience is likely something that people want to avoid.
Similarly, you may argue "you can't prepare for everything in life", but giving a talk is an already stressful situation that someone has spent a lot of time preparing for. Adding a wildcard element into that is unnecessary extra stress.
After publishing this post, Nic on Twitter mentioned that some folks don't like asking questions in front of a room full of people. This is an aspect that I'd forgotten to mention but one that I've experienced myself.
Public speaking is one of the most common fears that people have and in this situation asking a question is public speaking. This means that many people won't want to ask a question, no matter how much they want to know the answer. It can also mean that people can leave the talk feeling frustrated and confused, which is the opposite of what a speaker wants.
In my experience, the questions asked usually need some sort of back and forth to get an answer that both sides are happy with. This is really difficult to do when the speaker is on stage or at the front of the room and the asker is amongst a sea of faces.
Having a conversation allows the speaker and asker to be on the same page and get an answer that they're both happy with. On top of this, the speaker can ask other people questions when in a conversation situation, meaning that everyone can learn something.
Often when a speaker ends their talk, they end it with a point that they want the audience to think about, or with a big finish to wow the audience. The mood is then brought down by the Q&A, which usually finishes at its lowest point: a lingering silence while checking whether there are any more questions.
At LeedsJS we have 2 talks split by a 20-minute break. Instead of Q&A, I encourage people to come and chat with the speaker after the talk. This means that only those who are interested in learning more are part of the conversation, instead of everyone being forced into it.
During this time I make sure I'm around the speaker for support and so that I can step in if someone is being rude or if there's an issue.
The conversations after the talks at the April event were great; the speakers and attendees all seemed to enjoy them. There was a lot of back and forth and some great discussion about parts of the talks and things related to them.
I've declared the trial at April's event a success. Q&A is not coming back to LeedsJS.
This has been something that's been in my head for a while, but a few things recently brought it to the front of my mind.
Firstly, I was picked to speak at ScotlandJS 2018, which has something they call the "discussion track". After every 3 talks, there's a 20 minute break where the speakers from those talks will be in an advertised location for people to come and chat with them.
I also attended the JSHeroes conference the week before April's LeedsJS event. While the conference was great and the organisers tried to make the Q&A section as engaging as possible, it still suffered many of the issues I've mentioned in this post. This is not the fault of the organisers, the work they did was fantastic!
Another thing was that Kitze shared his "awesome conference practices" document. There was some healthy discussion about Q&A as part of that and I realised that other people shared my views on Q&A.
I guess this should be the first step, because you need to have stuff to be able to give away stuff. I know of a few ways to do this:
Ask companies to give you stuff. When I first got involved in LeedsJS I reached out through Twitter to one of the developer advocates at Google and after a few emails I had some Google merchandise to give away.
Company sponsorship schemes. A few companies have proper schemes where they'll give merchandise or their products away to community groups. A couple of examples are JetBrains and GitHub.
Get sponsors to buy you stuff. If there are companies interested in sponsoring your events, an alternative to getting them to buy the food or drinks is to ask them to buy some prizes. This could be books, subscription codes or software licenses, but that's something to discuss with the sponsor.
Conferences will sometimes give you a ticket to give away in return for a bit of promotion, so it may be worth asking. They will also sometimes have a discount code that you can make available to your members.
With LeedsJS we've tried a couple of ways to give our prizes away.
Originally we started setting challenges around the main talk, so if we had a talk about Angular then the challenge would be to build something with Angular.
We'd get a couple of entries around some of the more interesting topics, but we usually struggled to get anybody to enter. Ultimately we gave up on challenges because of this.
I do think this is a strong idea, but it would be better suited to framework- or library-specific groups, where a lot of the topics are based on the foundations of that framework or library, whereas LeedsJS covers JavaScript in general.
The current way that we're doing our giveaways at LeedsJS is through a prize draw. As part of the introduction to the event, I have a slide with a link to a Google form for attendees to fill out. At the end of the event I then run a script attached to the form that will pick a random entrant as the winner.
If you feel like this is the right approach for your group, then I have a template form that you can use to get started. The form has the script attached to it already, so all you have to do is make a copy, customise it to suit your giveaway and then share it. When you come to select your winner, click on the puzzle piece in the top right and select "Choose random responder". You'll need to authenticate with your Google account so that it can read the data. You can get the form from this Google folder.
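The script attached to the template does the selection for you, but as a very rough sketch of the idea (this isn't the actual script; the function name is mine, and it assumes the entrant's name is the first question on the form), picking a random respondent in Google Apps Script could look something like this:
function chooseRandomResponder() {
    // Grab every response submitted to the form this script is attached to
    var responses = FormApp.getActiveForm().getResponses();
    // Pick one at random
    var winner = responses[Math.floor(Math.random() * responses.length)];
    // Assumes the entrant's name is the first item on the form
    Logger.log("Winner: " + winner.getItemResponses()[0].getResponse());
}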
Currently this form only asks for the entrant's name, but after a discussion with my friend and former co-organiser Chris Manson I may update the one for LeedsJS to ask for some feedback about the event/group too.
document.querySelectorAll is really useful for easily finding elements. I had assumed that it returned an Array, but I've just found out that I was wrong: it returns a NodeList.
A NodeList is still a collection, but it doesn't have the Array prototype methods such as .filter or .map. You can convert the NodeList to an array using Array.from(), or you can use methods from the Array prototype via .call, like this:
var elements = document.querySelectorAll(".content__article");
elements = [].filter.call(elements, function (el) {
// Your filter code goes here
});
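As a quick sketch of the Array.from approach (the class check inside the filter is a made-up example, not something from this post):
var elements = Array.from(document.querySelectorAll(".content__article"));
// elements is now a real Array, so the prototype methods work directly
var featured = elements.filter(function (el) {
    return el.classList.contains("content__article--featured"); // hypothetical class
});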
There are a few ways to blackbox a script:
In the settings section, go to the "Blackboxing" tab
Click "Add pattern..."
In the text box, enter either a script name or a regex pattern that will match scripts that you want to blackbox.
Click "Add"
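For example (these patterns are illustrations of mine, not from any official docs), a pattern like jquery\.min\.js would blackbox the minified jQuery file, while a broader pattern like node_modules would match any script whose path contains node_modules.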
In the sources section, open the file you want to blackbox
Right click in the editor pane
Select "Blackbox script"
When paused on a breakpoint, go to the sources section
In the call stack pane, right click on a function from the file you want to blackbox
Select "Blackbox script"
The company I was working for at the time had a huge industry show at the end of January every year. This was a chance to show off new features and demo versions of future features, and it led to a lot of overtime on evenings and weekends in the run-up to it.
My commute was at least an hour and a half of driving each way, and I was so tired and mentally drained that I couldn't fully pay attention, no matter how hard I tried. About half way to work one morning, I came over a ridge to find a van turning in the road with a queue of 4 cars stopped in front of me. I tried to slow down but couldn't react quickly enough, and I swerved towards the kerb to avoid hitting the Porsche at the end of the queue. My car mounted the high kerb, slid along it on its underside past 3 of the cars and scraped along the side of the last one before landing back on the road.
I pulled over to the side of the road and checked that the people in the other cars were alright. I then checked my car, and it was in awful shape: the front axle had snapped when I hit the high kerb, the driver's side door was dented and scratched from scraping along the last car, and it was leaking fluids from the damaged underside.
A passing police officer pulled over to help when he saw what had happened. He helped us make sure that we'd done everything we needed to, called a recovery truck to collect my car and gave me a lift to the bus station so I could get to work. I'm super thankful that he stopped as I wasn't in a great mental state to deal with this and his guidance was extremely helpful.
Having totalled my car, my commute increased to 2 and a half hours each way, and I had to cope with this on top of the high workload and dealing with the insurance company. I'd already been thinking about moving somewhere closer, but the longer commute made it more urgent, so I also had to deal with finding somewhere to live and packing.
I found somewhere, moved there in early February and took a few days off to relax, recover and reflect. I really felt the difference and realised that I was burnt out and that burnout has real effects. I'd been spending my own time, exhausting myself and putting my own health at risk for a company that in the long run didn't really care about me.
I stopped doing overtime and started fighting to stop the causes of it. My personal time since then has been spent doing stuff for me, whether that's playing games, working on my open source projects, cooking or doing pixel art.
Burnout has real effects. Take care of yourself, your health is important.
The first thing we have to do is configure the repo on GitHub.
Open the "settings" page on your repo and scroll down to the "GitHub Pages" section
Select the source that your GitHub pages site will be built from. There are a few options here:
gh-pages branch - This means you can use a separate branch on your project just for your documentation.
master branch - This means that the content of your master branch is used. I use this for this website.
master branch /docs folder - This uses the contents of a "docs" folder in the root of your master branch for the site content.
None - This disables GitHub Pages for the repo.
Set your custom domain in the "Custom domain" section
Note: The "Enforce HTTPS" box gets disabled when you add a custom domain. This is fine as we'll be enforcing HTTPS in the next section.
To add HTTPS to our GitHub Pages custom domain, we'll be using the free tier of Cloudflare.
During the process of adding your site, you'll be shown a list of DNS records for your domain. Alternatively you can go to the "DNS" section of the dashboard.
You need to add a CNAME record for www that points to your GitHub Pages URL; for me, that means it's set to lukeb-uk.github.io. Make sure that the traffic is set to go through Cloudflare (the orange cloud).
You could also add a record for your root domain instead of www, so that requests to the root domain also reach GitHub Pages. This is how I have it set up.
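As a rough sketch of what those records might look like in zone-file style (using my domain as the example; your values will differ):
www            CNAME    lukeb-uk.github.io
lukeb.co.uk    CNAME    lukeb-uk.github.io
A CNAME at the root isn't normally valid DNS, but Cloudflare flattens root CNAMEs to the underlying addresses, so this setup works on their platform.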
On the dashboard, go to the "Crypto" section. In the SSL section you should select "Full" and not "Full (strict)", because GitHub doesn't serve a certificate that matches your custom domain, so strict validation between Cloudflare and GitHub would fail.
Further down the page there's a setting for "Always use HTTPS". Set this to "On". This means that any HTTP connections will get forced to HTTPS.
You now have a site hosted on GitHub Pages that uses a custom domain and HTTPS!
Cloudflare also has a CDN (content delivery network) allowing optimised delivery of content to your visitors. By default HTML content isn't cached, but you can add a page rule to do this.
In the "Page Rules" section of the dashboard, create a page rule. In the URL box, put your domain followed by an asterisk. For me that would be https://lukeb.co.uk/*
. Then in the settings section, select "Cache Level" from the drop down and then set the cache level to "Cache Everything". Save and deploy this rule.
You can also enable Cloudflare's Always Online™ feature that will serve your site's static pages from their cache if GitHub has an outage. You can enable this in the "Caching" section of the dashboard.
This year I was lucky enough to be picked to speak at JSConf Budapest (video), my first time speaking at a large conference. The event was awesome and I'm so happy that I got to be part of such an excellent lineup and meet so many great people. Also, Budapest is a beautiful city; I need to go back.
I was also asked to speak at the Leeds Testing Atelier, a community-run testing conference, where I spoke about snapshot testing (video). Again, this was a great experience!
Rounding out my speaking this year, I was asked at short notice to speak at GDG DevFest Coimbra. I jumped at the opportunity to speak again and was glad that I did! The event was a mix of web, Android and a few other things, all really interesting! Again, I got to meet some fantastic people and have some great conversations. I didn't get to spend a lot of time in Portugal, but I definitely want to go back.
I'm really fortunate that my manager has been super supportive as I've been doing this, allowing me to take the time off work without having to use holiday as well as giving some great feedback. My friends and co-workers have been really supportive too.
LeedsJS is the JavaScript community group that I run in Leeds and this has been our best year so far!
Towards the end of 2016 we tried having 2 talks per event and I felt it worked so well that this year we carried on with it!
We've had great talks on JavaScript topics such as React, Swagger and PWAs as well as some more human topics such as mental health and continuous delivery.
We also became an alcohol-free event and started offering more inclusive food choices (such as vegan and gluten-free options). I've noticed that our attendees are a bit more diverse as a result, which is what I was aiming for!
I've been at Sky Betting & Gaming for 18 months now and it's flown by. I'm extremely fortunate to be working with a great team.
I've learned a lot over the past year and being allowed the time to learn stuff while working has been fantastic. As a result, I finally did some work with React this year. Having the opportunity to mentor has been amazing too!
A huge milestone was releasing the responsive site to 100% of customers. My team has been working on this the entire time that I've worked there, taking an experimental approach to make sure we got it right. I'm super proud of the work we've done.
Earlier this year I released jscad-includify, a Node module that builds the includes for a JSCAD project into one file. This was a great opportunity to learn more about ASTs, as well as to have a go at building a utility that works as both a CLI tool and a Node module.
As a result of the work I did on jscad-includify, I got interested in snapshot testing. As I couldn't find a tool that would let me do snapshot testing with TAP (the Node test framework that I like), I ended up writing tapshot. This was an interesting challenge as I learned a lot about snapshot testing and had to ensure it was well tested itself.
I also rebuilt this website with the aim of keeping it more up to date and to blog more. You can read more about the website here.
Throughout the year I've also been working on my automation bot Woodhouse. I've not added a lot of major functionality and have mainly been working on tidying up the version 2 code, but I have been working on automating my heating over the past couple of months and will hopefully have a post up on that soon.
This year I've been far more aware of and open about my mental health.
I've taken 2 separate weeks to clean and declutter my house. During this time I cleared out a load of junk, cleaned, donated a load of stuff to charity and set up a work area so that I can build things. The aim was to make my house a place I feel more at home in and somewhere I can feel comfortable.
A co-worker and I also pitched for Mind to be Sky Betting & Gaming's charity of the year for 2018. The winner will be announced early next year so fingers crossed that we win!
I have no real concrete aims for 2018, but I have a few rough ideas:
In the "Sources" panel in Chrome DevTools, open the JavaScript file that you want to debug.
Right click on the line number of the line you want to add the breakpoint to and select "Add conditional breakpoint".
Because the condition is executed each time that breakpoint is checked, you could even have it run other code. A great example would be when you're trying to log out some input from a certain point in code, but can't edit the source (e.g. when debugging built/live code).
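As a rough sketch (the variable name here is made up), the condition could be something like:
console.log("value at this point:", someInput)
Because console.log returns undefined, which is falsy, the breakpoint never actually pauses; it just logs the value each time the line is hit.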
As my focus is no longer on PHP, I wanted to move towards something JavaScript-based. I didn't want to use a client-side framework, as the content is pretty static and I feel it would be needless to require JavaScript to present static content.
I had a look around and chose Hexo as a static site generator. Because Hexo just outputs HTML files, I'm using GitHub Pages to host it. The site consists of 2 repos: website-hexo, which holds the Hexo source, and website, which holds the generated site.
I have automated the build process using Travis CI. Any time I push a commit up to the website-hexo repo, the site is generated and pushed to a new branch on the website repo. A pull request is made and a preview is pushed up to ZEIT's Now. The code for all this can be found in the Travis config file in the website-hexo repo.
For the actual website, I wanted to use pixel art in the design. I really enjoy creating pixel art and love the style visually. I looked at some retro games for UI inspiration and put together a design that I'm pretty happy with.
A lot of the actual layout is achieved with flexbox. I love how powerful flexbox is and what you can achieve with it. Although it does sort of remind me of the table layouts that were popular when I started developing.
I took the opportunity to add a little animation using CSS, as it's not something I've done much of and I wanted to try more. I feel it's worked well!
The main functionality of the site uses no JavaScript, however there is an easter egg that does. The only hint I'll give is that you should follow the instructions in the header.