Canvas vs SVG: Choosing the Right Tool for the Job



HTML5 Canvas and SVG are both standards-based HTML5 technologies that you can use to create amazing graphics and visual experiences. The question I’m asking in this article is the following: Should it matter which one you use in your project? In other words, are there any use cases for preferring HTML5 Canvas over SVG?

First, let’s spend a few words introducing HTML5 Canvas and SVG.

What Is HTML5 Canvas?

Here’s how the WHATWG specification introduces the canvas element:

The canvas element provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, art, or other visual images on the fly.

In other words, the <canvas> tag exposes a surface where you can create and manipulate rasterized images pixel by pixel using a JavaScript programmable interface.

Here’s a basic code sample:

See the Pen
Basic Canvas Shape Demo
by SitePoint (@SitePoint)
on CodePen.


<canvas id="myCanvas" width="800" height="800"></canvas>

The JavaScript:

const canvas = document.getElementById('myCanvas');
const context = canvas.getContext('2d');
context.fillStyle = '#c00';
context.fillRect(10, 10, 100, 100);

You can take advantage of the HTML5 Canvas API methods and properties by getting a reference to the 2D context object. In the example above, I’ve drawn a simple red square, 100 x 100 pixels in size, placed 10px from the left and 10px from the top of the <canvas> drawing surface.

Red square drawn using HTML5 Canvas

Being resolution-dependent, images you create on <canvas> may lose quality when enlarged or displayed on Retina Displays.

Drawing simple shapes is just the tip of the iceberg. The HTML5 Canvas API allows you to draw arcs, paths, text, gradients, etc. You can also manipulate your images pixel by pixel. This means that you can replace one color with another in certain areas of the graphics, you can animate your drawings, and even draw a video onto the canvas and change its appearance.
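As a sketch of that pixel-level access, here's the arithmetic behind a grayscale conversion of the kind mentioned above. It operates on the flat RGBA byte array that context.getImageData() returns; the function name and the luminosity weights are my own choices for illustration, not part of the Canvas API:

```javascript
// Grayscale conversion over RGBA pixel data, as returned by
// context.getImageData(x, y, w, h).data — a flat array of
// [r, g, b, a, r, g, b, a, ...] byte values.
function toGrayscale(data) {
  for (let i = 0; i < data.length; i += 4) {
    // Weighted luminosity: green contributes most to perceived brightness
    const gray = Math.round(
      0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2]
    );
    data[i] = data[i + 1] = data[i + 2] = gray;
    // data[i + 3] (alpha) is left untouched
  }
  return data;
}
```

In a browser, you'd pass the modified buffer back to context.putImageData() to paint the result onto the canvas.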

What Is SVG?

SVG stands for Scalable Vector Graphics. According to the specification:

SVG is a language for describing two-dimensional graphics. As a standalone format or when mixed with other XML, it uses the XML syntax. When mixed with HTML5, it uses the HTML5 syntax. …

SVG drawings can be interactive and dynamic. Animations can be defined and triggered either declaratively (i.e., by embedding SVG animation elements in SVG content) or via scripting.

SVG is an XML file format designed to create vector graphics. Being scalable has the advantage of letting you increase or decrease a vector image while maintaining its crispness and high quality. (This can’t be done with HTML5 Canvas-generated images.)

Here’s the same red square (previously created with HTML5 Canvas) this time drawn using SVG:

See the Pen
Basic SVG Shape
by SitePoint (@SitePoint)
on CodePen.

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 600">
  <desc>Red rectangle shape</desc>
  <rect x="10" y="10" width="100" height="100" fill="#c00" />
</svg>

You can do with SVG most of the stuff you can do with Canvas — such as drawing shapes and paths, gradients, patterns, animation, and so on. However, these two technologies work in fundamentally different ways. Unlike a Canvas-based graphic, SVG has a DOM, and as such both CSS and JavaScript have access to it. For example, you can change the look and feel of an SVG graphic using CSS, animate its nodes with CSS or JavaScript, and make any of its parts respond to a mouse or keyboard event just like a <div>. As will become clearer in the following sections, this difference plays a significant part when you need to make a choice between Canvas and SVG for your next project.

It’s crucial to distinguish between immediate mode and retained mode. HTML5 Canvas is an example of the former, SVG of the latter.

Immediate mode means that, once your drawing is on the canvas, the canvas stops keeping track of it. In other words, you, as the developer, need to work out the commands to draw objects, create and maintain the model or scene of what the final output should look like, and specify what needs to be updated. The browser’s Graphics API simply communicates your drawing commands to the browser, which then executes them.

SVG uses the retained approach: you simply describe what you want to display on the screen, and the browser’s Graphics API creates an in-memory model or scene of the final output and translates it into drawing commands for your browser.

Being an immediate graphics system, Canvas hasn’t got a DOM, or Document Object Model. With Canvas, you draw your pixels and the system forgets all about them, thereby cutting down on the extra memory needed to maintain an internal model of your drawing. With SVG, each object you draw gets added to the browser’s internal model, which makes your life as a developer somewhat easier, but at some costs in terms of performance.
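To make the contrast concrete, here's a toy sketch (all names invented for illustration) of what a retained-mode layer looks like: the scene keeps a list of every object, so the drawing commands can be replayed at any time, which is roughly what the browser does internally for SVG and exactly what Canvas never does for you:

```javascript
// A toy retained-mode layer over an immediate-mode drawing API.
class Scene {
  constructor() {
    this.shapes = [];
  }

  // In retained mode, adding a shape just records it in the model.
  add(shape) {
    this.shapes.push(shape);
    return shape;
  }

  // The system can replay the whole model at any time. For clarity,
  // the immediate-mode "drawing commands" are returned as strings
  // rather than issued against a real canvas context.
  redraw() {
    return this.shapes.map(s => `fillRect(${s.x}, ${s.y}, ${s.w}, ${s.h})`);
  }
}
```

With immediate-mode Canvas, you'd have to maintain a structure like `this.shapes` yourself and re-issue every `fillRect` call on each frame.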

On the basis of the distinction between immediate and retained mode, as well as other specific characteristics of Canvas and SVG respectively, it’s possible to outline some cases where using one technology over the other might better serve your project’s goals.

HTML5 Canvas: Pros and Cons

The HTML5 Canvas specification clearly recommends that authors should not use the <canvas> element where they have other more suitable means available to them.

For instance, for a graphically rich <header> element, the best tools are HTML and CSS, not <canvas> (and neither is SVG, by the way). Dr Abstract, creator and founder of the Zim JS canvas framework, confirms this view as he writes:

The DOM and the DOM frameworks are good at displaying information and in particular text information. In comparison, the canvas is a picture. Text can be added to the canvas and it can be made responsive, but the text cannot be readily selected or searched as on the DOM. Long scrolling pages of text or text and images is an excellent use of the DOM. The DOM is great for social media sites, forums, shopping, and all the information apps we are used to. — “When to Use a JavaScript Canvas Library or Framework”

So, that’s a clear-cut example of when not to use <canvas>. But when is <canvas> a good option?

This tweet by Sarah Drasner summarizes some major pros and cons of canvas features and capabilities, which can help us to work out some use cases for this powerful technology.

What HTML5 Canvas Can Be Great For

Alvin Wan has benchmarked Canvas and SVG in terms of performance both in relation to the number of objects being drawn and in relation to the size of the objects or the canvas itself. He states his results as follows:

In sum, the overhead of DOM rendering is more poignant when juggling hundreds if not thousands of objects; in this scenario, Canvas is the clear winner. However, both the canvas and SVG are invariant to object sizes. Given the final tally, the canvas offers a clear win in performance.

Drawing on what we know about Canvas, especially its excellent performance at drawing lots of objects, here are some possible scenarios where it might be appropriate, and even preferable to SVG.

Games and Generative Art

For graphics-intensive, highly interactive games, as well as for generative art, Canvas is generally the way to go.

Ray Tracing

Ray tracing is a technique for creating 3D graphics.

Ray tracing can be used to hydrate an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. … The effects achieved by ray tracing, … range from … creating realistic images from otherwise simple vector graphics to applying photo-like filters to remove red-eye. — SVG vs Canvas: how to choose, Microsoft Docs.

If you’re curious, here’s a raytracer application in action by Mark Webster.

Canvas Raytracer by Mark Webster

However, although HTML5 Canvas is definitely better suited to the task than SVG could ever be, it doesn’t necessarily follow that ray tracing is best executed on a <canvas> element. In fact, the strain on the CPU could be quite considerable, to the point that your browser could stop responding.

Drawing a Significant Number of Objects on a Small Surface

Another example is a scenario where your application needs to draw a significant number of objects on a relatively small surface — such as non-interactive real-time data visualizations like graphical representations of weather patterns.

Graphical representation of weather patterns with HTML5 Canvas

The above image is from this MSDN article, which was part of my research for this piece.

Pixel Replacement in Videos

As demonstrated in this HTML5 Doctor article, another example where Canvas would be appropriate is when replacing a video background color with a different color, another scene, or image.

Replacing pixels in a video to convert it to grayscale on the fly using HTML5 Canvas

What HTML5 Canvas Isn’t So Great For

On the other hand, there are a number of cases where Canvas might not be the best choice compared to SVG.


Scalability

Most scenarios where scalability is a plus are going to be better served by SVG rather than Canvas. High-fidelity, complex graphics like building and engineering diagrams, organizational charts, biological diagrams, and so on, are examples of this.

When drawn using SVG, enlarging the images or printing them preserves all the details to a high level of quality. You can also generate these documents from a database, which makes the XML format of SVG highly suited to the task.
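For instance, because SVG is just XML, generating such a document from database records is ordinary string (or DOM) templating. Here's a minimal sketch; the bar-chart shape and the function name are invented for the example:

```javascript
// Generate an SVG bar chart string from database-style records.
// Each row is expected to have a numeric `value` property.
function barChart(rows, barWidth = 40, maxHeight = 100) {
  const bars = rows
    .map((row, i) =>
      `<rect x="${i * barWidth}" y="${maxHeight - row.value}" ` +
      `width="${barWidth - 4}" height="${row.value}" fill="#c00" />`
    )
    .join('');
  return `<svg xmlns="http://www.w3.org/2000/svg" ` +
    `viewBox="0 0 ${rows.length * barWidth} ${maxHeight}">${bars}</svg>`;
}
```

The resulting markup can be served as-is, and it scales cleanly at any print or screen resolution.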

Also, these graphics are often interactive. Think about seat maps when you’re booking your plane ticket online as an example, which makes them a great use case for a retained graphics system like SVG.

That said, with the help of some great libraries like CreateJS and Zim (which extends CreateJS), developers can relatively quickly integrate mouse events, hit tests, multitouch gestures, drag and drop capabilities, controls, and more, to their Canvas-based creations.


Accessibility

Although there are steps you can take to make a Canvas graphic more accessible — and a good Canvas library like Zim could be helpful here to speed up the process — canvas doesn’t shine when it comes to accessibility. What you draw on the canvas surface is just a bunch of pixels, which can’t be read or interpreted by assistive technologies or search engine bots. This is another area where SVG is preferable: SVG is just XML, which makes it readable by both humans and machines.

No Reliance on JavaScript

If you don’t want to use JavaScript in your application, then Canvas isn’t your best friend. In fact, the only way you can work with the <canvas> element is with JavaScript. Conversely, you can draw SVG graphics using a standard vector editing program like Adobe Illustrator or Inkscape, and you can use pure CSS to control their appearance and perform eye-catching, subtle animations and microinteractions.

Combining HTML5 Canvas and SVG for Advanced Scenarios

There are cases where your application can get the best of both worlds by combining HTML5 Canvas and SVG. For instance, a Canvas-based game could implement sprites from SVG images generated by a vector editing program, to take advantage of the scalability and reduced download size compared to a PNG image. Or a paint program could have its user interface designed using SVG, with an embedded <canvas> element used for drawing.

Finally, a powerful library like Paper.js makes it possible to script vector graphics on top of HTML5 Canvas, thereby offering devs a convenient way of working with both technologies.


Conclusion

In this article, I’ve explored some key features of both HTML5 Canvas and SVG to help you decide which technology might be most suited to particular tasks.

What’s the answer?

Chris Coyier agrees with Benjamin De Cock, who has shared his own take on the question in a tweet.

Dr Abstract offers a long list of things that are best built using Canvas, including interactive logos and advertising, interactive infographics, e-learning apps, and much more.

In my view, there aren’t any hard and fast rules for when it’s best to use Canvas instead of SVG. The distinction between immediate and retained mode points to HTML5 Canvas as being the undisputed winner when it comes to building things like graphic-intensive games, and to SVG as being preferable for things like flat images, icons and UI elements. However, in between there’s room for HTML5 Canvas to expand its reach, especially considering what the various powerful Canvas libraries could bring to the table. Not only do they make it easier to work with both technologies in the same graphic work, but they also make it so that some hard-to-implement features in a native Canvas-based project — like interactive controls and events, responsiveness, accessibility features, and so on — could now be at most developers’ fingertips, which leaves the door open to the possibility of ever more interesting uses of HTML5 Canvas.



How to Use PostCSS as a Configurable Alternative to Sass



Web developers love the Sass CSS preprocessor. According to the Sass opinions in the State of CSS Survey, every developer knows what it is, 89% use it regularly, and 88% have high satisfaction.

Many web bundlers include Sass processing, but you may also be using PostCSS without realizing it. PostCSS is primarily known for its Autoprefixer plugin, which automatically adds -webkit, -moz, and -ms vendor prefixes to CSS properties when required. Its plugin system means it can do so much more … such as compiling .scss files without having to use the Sass compiler.

This tutorial explains how to create a custom CSS preprocessor which compiles Sass syntax and supplements it with further features. It’s ideal for anyone with specific CSS requirements who knows a little Node.js.

Quick Start

An example PostCSS project can be cloned from GitHub. It requires Node.js, so run npm install to fetch all dependencies.

Compile the demonstration src/scss/main.scss source code to build/css/main.css using:

npm run css:dev

Auto-compile whenever files are changed using:

npm run css:watch

Then exit watching by pressing Ctrl/Cmd + C in the terminal.

Both options also create a source map at build/css/, which references the original source files in the developer tools.

Production-level minified CSS without a source map can be compiled using:

npm run css:build

Refer to the file for further information.

Should You Replace Sass with PostCSS?

There’s nothing wrong with the Sass compiler, but consider the following factors.

Module Dependencies

The latest Dart version of Sass can be installed globally using the Node.js npm package manager:

npm install -g sass

Compile Sass .scss code with:

sass [input.scss] [output.css]

Source maps are automatically generated (--no-source-map will switch them off) or --watch can be added to auto-compile source files when they change.

The latest version of Sass requires less than 5MB of installation space.

PostCSS should require fewer resources: a basic Sass-like compiler with auto-prefixing and minification needs less than 1MB of space. In reality, your node_modules folder will expand to more than 60MB, and it grows rapidly as more plugins are added. This is mostly npm installing other dependencies, and even though PostCSS may not use them, it can’t be considered a lightweight alternative.

However, if you’re already using PostCSS for Autoprefixer or other purposes, Sass may not be necessary.

Processing Speed

The slow, Ruby-based Sass compiler is long gone, and the latest edition uses a compiled Dart runtime. It’s fast.

PostCSS is pure JavaScript and, while benchmarks will differ, it can be three times slower at compiling the same source code.

However, this speed difference will be less noticeable if you’re already running PostCSS after Sass. A two-stage process can be slower than using PostCSS alone, since much of its work involves tokenizing CSS properties.


Customization

The Sass language provides a large set of features, including variables, nesting, partials, mixins, and more. There are downsides:

  1. You cannot easily add new features.

    Perhaps you’d like an option to convert HSLA colors to RGB. It may be possible to write a custom function, but other requirements will be impossible — such as inlining an SVG as a background image.

  2. You can’t easily restrict the feature set.

    Perhaps you’d prefer your team not to use nesting or @extend. Linting rules will help, but they won’t stop Sass compiling valid .scss files.

PostCSS is considerably more configurable.

On its own, PostCSS does nothing. Processing functionality requires one or more of the many plugins available. Most perform a single task, so if you don’t want nesting, don’t add a nesting plugin. If necessary, you can write your own plugins in a standard JavaScript module that can harness the power of Node.js.
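To give a flavor of that, here's a minimal sketch of a custom plugin in the PostCSS 8 style. The plugin name and the `corp-red` keyword are invented for the example, while the `postcssPlugin` property and the `Declaration` visitor are the hooks PostCSS actually calls:

```javascript
// A minimal PostCSS 8 plugin: rewrites a (made-up) `corp-red`
// keyword to its hex value in every declaration it visits.
const corpColors = { 'corp-red': '#c00' };

const corpColorsPlugin = () => ({
  postcssPlugin: 'postcss-corp-colors',
  // PostCSS calls this visitor once per CSS declaration
  Declaration(decl) {
    if (corpColors[decl.value]) {
      decl.value = corpColors[decl.value];
    }
  }
});
corpColorsPlugin.postcss = true;

module.exports = corpColorsPlugin;
```

Once saved as a local module, it could be added to the plugins array of a PostCSS configuration like any third-party plugin.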

Install PostCSS

PostCSS can be used with webpack, Parcel, Gulp.js, and other build tools, but this tutorial shows how to run it from the command line.

If necessary, initialize a new Node.js project with npm init. Set up PostCSS by installing the following modules for basic .scss parsing with plugins for partials, variables, mixins, nesting, and auto-prefixing:

npm install --save-dev postcss postcss-cli postcss-scss postcss-advanced-variables postcss-nested autoprefixer

Like the example project, PostCSS and its plugins are installed locally. This is a practical option if your projects are likely to have differing compilation requirements.

Note: PostCSS can only be run from a JavaScript file, but the postcss-cli module provides a wrapper that can be called from the command line. The postcss-scss module allows PostCSS to read .scss files but doesn’t transform them.

Autoprefixer Configuration

Autoprefixer uses browserslist to determine which vendor prefixes are required according to your list of supported browsers. It’s easiest to define this list as a "browserslist" array in package.json. The following example adds vendor prefixes where any browser has at least 2% market share:

"browserslist": [
  "> 2%"
]

Your First Build

You’ll typically have a single root Sass .scss file which imports all required partial/component files. For example:

 @import '_variables';
@import '_reset';
@import 'components/_card'; 

Compilation can be started by running npx postcss, followed by the input file, an --output file, and any required options. For example:

npx postcss ./src/scss/main.scss \
  --output ./build/css/main.css \
  --env development \
  --map \
  --verbose \
  --parser postcss-scss \
  --use postcss-advanced-variables postcss-nested autoprefixer

This command:

  1. parses ./src/scss/main.scss
  2. outputs to ./build/css/main.css
  3. sets the NODE_ENV environment variable to development
  4. outputs an external source map file
  5. sets verbose output and error messages
  6. sets the postcss-scss Sass parser, and
  7. uses the plugins postcss-advanced-variables, postcss-nested, and autoprefixer to handle partials, variables, mixins, nesting, and auto-prefixing

Optionally, you could add --watch to auto-compile when .scss files are modified.

Create a PostCSS Configuration File

The command line quickly becomes unwieldy for longer lists of plugins. You can define it as an npm script, but a PostCSS configuration file is an easier option that offers additional possibilities.

PostCSS configuration files are JavaScript module files named postcss.config.js and typically stored in the project’s root directory (or whichever directory you run PostCSS from). The module must export a single function:

module.exports = cfg => { };

It’s passed a cfg object with properties set by the postcss command. For example:

{
  cwd: '/home/name/postcss-demo',
  env: 'development',
  options: {
    map: undefined,
    parser: undefined,
    syntax: undefined,
    stringifier: undefined
  },
  file: {
    dirname: '/home/name/postcss-demo/src/scss',
    basename: 'main.scss',
    extname: '.scss'
  }
}

You can examine these properties and react accordingly — for example, determine whether you’re running in development mode and processing a .scss input file:

module.exports = cfg => {

  const dev = cfg.env === 'development',
    scss = cfg.file.extname === '.scss';

};

The function must return an object with property names matching the postcss-cli command line options. The following configuration file replicates the long quick start command used above:

module.exports = cfg => {

  const dev = cfg.env === 'development',
    scss = cfg.file.extname === '.scss';

  return {
    map: dev ? { inline: false } : false,
    parser: scss ? 'postcss-scss' : false,
    plugins: [
      require('postcss-advanced-variables')(),
      require('postcss-nested')(),
      require('autoprefixer')()
    ]
  };

};

PostCSS can now be run using a shorter command:

npx postcss ./src/scss/main.scss \
  --output ./build/css/main.css \
  --env development \
  --verbose

Here are some things to note:

  • --verbose is optional: it’s not set in postcss.config.js.
  • Sass syntax parsing is only applied when the input is a .scss file. Otherwise, it defaults to standard CSS.
  • A source map is only output when --env is set to development.
  • --watch can still be added for auto-compilation.

If you’d prefer postcss.config.js to be in another directory, it can be referenced with the --config option — such as --config /mycfg/. In the example project, the configuration above is located in config/postcss.config.js. It’s referenced by running npm run css:basic, which calls:

npx postcss src/scss/main.scss \
  --output build/css/main.css \
  --env development \
  --verbose \
  --config ./config/

Adding Further Plugins

The following sections provide examples of PostCSS plugins which either parse additional .scss syntax or provide processing beyond the scope of the Sass compiler.

Use Design Tokens

Design Tokens are a technology-agnostic way to store variables such as corporation-wide fonts, colors, spacing, etc. You could store token name–value pairs in a JSON file:

{
  "font-size": "16px",
  "font-main": "Roboto, Oxygen-Sans, Ubuntu, sans-serif",
  "lineheight": 1.5,
  "font-code": "Menlo, Consolas, Monaco, monospace",
  "lineheight-code": 1.2,
  "color-back": "#f5f5f5",
  "color-fore": "#444"
}

Then reference them in any web, Windows, macOS, iOS, Linux, Android, or other application.
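For instance, since the tokens are plain JSON, a few lines of JavaScript can turn the same file into CSS custom properties for projects that don't use Sass at all. This helper is a sketch of my own, not part of any plugin:

```javascript
// Convert a design-token object (as loaded from tokens.json)
// into a :root block of CSS custom properties.
function tokensToCustomProperties(tokens) {
  const lines = Object.entries(tokens).map(
    ([name, value]) => `  --${name}: ${value};`
  );
  return `:root {\n${lines.join('\n')}\n}`;
}
```

Running it over the tokens above would yield declarations such as `--color-back: #f5f5f5;`, usable from any stylesheet via `var(--color-back)`.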

Design tokens are not directly supported by Sass, but a JavaScript object with a variables property holding name–value pairs can be passed to the existing postcss-advanced-variables PostCSS plugin:

module.exports = cfg => {

  const variables = require('./tokens.json');

  const dev = cfg.env === 'development',
    scss = cfg.file.extname === '.scss';

  return {
    map: dev ? { inline: false } : false,
    parser: scss ? 'postcss-scss' : false,
    plugins: [
      require('postcss-advanced-variables')({ variables }),
      require('postcss-nested')(),
      require('autoprefixer')()
    ]
  };

};

The plugin converts all values to global Sass $variables which can be used in any partial. A fallback value can be set to ensure a variable is available even when it’s missing from tokens.json. For example:

 $color-back: #fff !default;

Token variables can then be referenced in any .scss file. For example:

body {
  font-family: $font-main;
  font-size: $font-size;
  line-height: $lineheight;
  color: $color-fore;
  background-color: $color-back;
}

In the example project, a tokens.json file is defined, which is loaded and used when running npm run css:dev.

Add Sass Map Support

Sass Maps are key–value objects. The map-get function can look up values by name.

The following example defines media query breakpoints as a Sass map with a respond mixin to fetch a named value:

$breakpoint: (
  'small': 36rem,
  'medium': 50rem,
  'large': 64rem
);

@mixin respond($bp) {

  @media (min-width: map-get($breakpoint, $bp)) {
    @content;
  }

}

Default properties and media query modifications can then be defined in the same selector. For example:

main {
  width: 100%;

  @include respond('medium') {
    width: 40em;
  }

}

Which compiles to CSS:

main {
  width: 100%;
}

@media (min-width: 50rem) {
  main {
    width: 40em;
  }
}

The postcss-map-get plugin adds Sass map processing. Install it with:

npm install --save-dev postcss-map-get

And update the postcss.config.js configuration file:

module.exports = cfg => {

  const variables = require('./tokens.json');

  const dev = cfg.env === 'development',
    scss = cfg.file.extname === '.scss';

  return {
    map: dev ? { inline: false } : false,
    parser: scss ? 'postcss-scss' : false,
    plugins: [
      require('postcss-advanced-variables')({ variables }),
      require('postcss-map-get')(),
      require('postcss-nested')(),
      require('autoprefixer')()
    ]
  };

};

Add Media Query Optimization

Since we’ve added media queries, it would be useful to combine and sort them into mobile-first order. For example, the following CSS:

@media (min-width: 50rem) {
  main {
    width: 40em;
  }
}

@media (min-width: 50rem) {
  #menu {
    width: 30em;
  }
}

can be merged to become:

@media (min-width: 50rem) {
  main {
    width: 40em;
  }
  #menu {
    width: 30em;
  }
}

This isn’t possible in Sass, but can be achieved with the PostCSS postcss-sort-media-queries plugin. Install it with:

npm install --save-dev postcss-sort-media-queries

Then add it to postcss.config.js:

module.exports = cfg => {

  const variables = require('./tokens.json');

  const dev = cfg.env === 'development',
    scss = cfg.file.extname === '.scss';

  return {
    map: dev ? { inline: false } : false,
    parser: scss ? 'postcss-scss' : false,
    plugins: [
      require('postcss-advanced-variables')({ variables }),
      require('postcss-map-get')(),
      require('postcss-nested')(),
      require('postcss-sort-media-queries')(),
      require('autoprefixer')()
    ]
  };

};
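Under the hood, mobile-first ordering just means sorting the @media blocks by ascending min-width. The sketch below illustrates that ordering on bare prelude strings; it's a drastic simplification of what the plugin really does, and assumes simple px/rem min-width queries only:

```javascript
// Sort media-query prelude strings into mobile-first order by
// their min-width value (queries without a min-width sort first).
function sortMobileFirst(queries) {
  const minWidth = q => {
    const match = q.match(/min-width:\s*([\d.]+)/);
    return match ? parseFloat(match[1]) : 0;
  };
  return [...queries].sort((a, b) => minWidth(a) - minWidth(b));
}
```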

Add Asset Processing

Asset management is not available in Sass, but postcss-assets makes it easy. The plugin resolves CSS image URLs, adds cache-busting, defines image dimensions, and inlines files using base64 notation. For example:

#mybackground {
  background-image: resolve('back.png');
  width: width('back.png');
  height: height('back.png');
  background-size: size('back.png');
}

compiles to:

#mybackground {
  background-image: url('/images/back.png');
  width: 600px;
  height: 400px;
  background-size: 600px 400px;
}

Install the plugin with:

npm install --save-dev postcss-assets

Then add it to postcss.config.js. In this case, the plugin is instructed to locate images in the src/images/ directory:

module.exports = cfg => {

  const variables = require('./tokens.json');

  const dev = cfg.env === 'development',
    scss = cfg.file.extname === '.scss';

  return {
    map: dev ? { inline: false } : false,
    parser: scss ? 'postcss-scss' : false,
    plugins: [
      require('postcss-advanced-variables')({ variables }),
      require('postcss-map-get')(),
      require('postcss-nested')(),
      require('postcss-sort-media-queries')(),
      require('postcss-assets')({ loadPaths: ['src/images/'] }),
      require('autoprefixer')()
    ]
  };

};

Add Minification

cssnano sets the standard for CSS minification. Minification can take more processing time than other plugins, so it can be applied in production only.

Install cssnano with:

npm install --save-dev cssnano

Then add it to postcss.config.js. In this case, minification only occurs when NODE_ENV is set to anything other than development:

module.exports = cfg => {

  const variables = require('./tokens.json');

  const dev = cfg.env === 'development',
    scss = cfg.file.extname === '.scss';

  return {
    map: dev ? { inline: false } : false,
    parser: scss ? 'postcss-scss' : false,
    plugins: [
      require('postcss-advanced-variables')({ variables }),
      require('postcss-map-get')(),
      require('postcss-nested')(),
      require('postcss-sort-media-queries')(),
      require('postcss-assets')({ loadPaths: ['src/images/'] }),
      require('autoprefixer')(),
      dev ? null : require('cssnano')()
    ]
  };

};

Setting --env to production triggers minification (and removes the source map):

npx postcss ./src/scss/main.scss \
  --output ./build/css/main.css \
  --env production \
  --verbose

In the example project, production CSS can be compiled by running npm run css:build.

Progress to PostCSS?

PostCSS is a powerful and configurable tool that can compile .scss files and enhance (or restrict) the standard Sass language. If you’re already using PostCSS for Autoprefixer, you may be able to remove the Sass compiler entirely while retaining the syntax you love.




Build a Twitter Clone Using TypeScript, Prisma and Next.js



The best way to learn a tool like React is to build something with it. Next.js is a powerful framework that helps you build for production. In this tutorial, we’ll learn how to build a clone of Twitter using Next.js and Prisma.

Our app will have the following features:

  • authentication using NextAuth and Twitter OAuth
  • an option to add a new tweet
  • an option to view a list of tweets
  • an option to view the profile of a user with only their tweets

The code for the app we’ll be building is available on GitHub. We’ll be using TypeScript to build our app.


Next.js is one of the most popular React.js frameworks. It has a lot of features like server-side rendering, TypeScript support, image optimization, I18n support, file-system routing, and more.

Prisma is an ORM for Node.js and TypeScript. It also provides a lot of features like raw database access, seamless relation API, native database types, and so on.

Software required

We’ll need the following installed for the purposes of running our app:

These technologies will be used in the app:

  • Next.js: for building our app
  • Prisma: for fetching and saving data into the database
  • Chakra UI: for adding styles to our app
  • NextAuth: for handling authentication
  • React Query: for fetching and updating data in our app

Creating a new Next.js App

Now, let’s get started! We’ll first create a new Next.js app by running the following command from our terminal:

yarn create next-app

We’ll need to enter the name of the app when the command prompts for it. We can name it anything we want. However, in this case, I’ll name it twitter-clone. We should be able to see a similar output on our terminal:

$ yarn create next-app
yarn create v1.22.5
[1/4] 🔍  Resolving packages...
[2/4] 🚚  Fetching packages...
[3/4] 🔗  Linking dependencies...
[4/4] 🔨  Building fresh packages...

success Installed "create-next-app@10.0.4" with binaries:
      - create-next-app
✔ What is your project named? twitter-clone
Creating a new Next.js app in /twitter-clone.

....

Initialized a git repository.

Success! Created twitter-clone at /twitter-clone
Inside that directory, you can run several commands:

  yarn dev
    Starts the development server.

  yarn build
    Builds the app for production.

  yarn start
    Runs the built app in production mode.

We suggest that you begin by typing:

  cd twitter-clone
  yarn dev

We can now go inside the twitter-clone directory and start our app by running the following command:

cd twitter-clone && yarn dev

Our Next.js app should be up and running on http://localhost:3000. We should be able to see the following screen:

Next.js app running on localhost:3000

Adding a Dockerized PostgreSQL Database

Next, let’s add a Dockerized PostgreSQL database so that we can save the users and tweets into it. We can create a new docker-compose.yml file in the root of our app with the following content:

version: "3"

services:

  db:
    container_name: db
    image: postgres:11.3-alpine
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  db_data:

If Docker is running on our machine, we can execute the following command from the root of our app to start our PostgreSQL container:

docker-compose up

The above command will start the PostgreSQL container and it can be accessed on postgresql://postgres:@localhost:5432/postgres. Note that you can also use a local installation of Postgres instead of a Dockerized one.
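As a side note, the pieces of that connection string can be read off with the standard WHATWG URL parser. This is just a sketch for illustration, not part of the tutorial's code:

```typescript
// Pull apart the connection string with the built-in URL class.
// Format: postgresql://<user>:<password>@<host>:<port>/<database>
const dbUrl = new URL("postgresql://postgres:@localhost:5432/postgres");

console.log(dbUrl.username); // "postgres"  (the default superuser)
console.log(dbUrl.password); // ""          (no password set in the container)
console.log(dbUrl.port);     // "5432"      (the port published in docker-compose.yml)
console.log(dbUrl.pathname); // "/postgres" (the default database)
```

The empty password works here because the official Postgres image allows passwordless local connections by default.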

Adding Chakra UI

Chakra UI is a simple React.js component library. It’s very popular, and it offers features like accessibility and support for both light and dark modes. We’ll be using Chakra UI to style our user interface. We can install the package by running the following command from the root of our app:

yarn add @chakra-ui/react @emotion/react @emotion/styled framer-motion

Let’s rename our _app.js file to _app.tsx inside the pages directory and replace its content with the following:

import { ChakraProvider } from "@chakra-ui/react";
import { AppProps } from "next/app";
import Head from "next/head";
import React from "react";

const App = ({ Component, pageProps }: AppProps) => {
  return (
    <>
      <Head>
        <link rel="shortcut icon" href="/images/favicon.ico" />
      </Head>
      <ChakraProvider>
        <Component {...pageProps} />
      </ChakraProvider>
    </>
  );
};

export default App;

Since we added a new TypeScript file, we’ll need to restart our Next.js server. Once we restart our server, we’ll get the following error:

$ yarn dev
yarn run v1.22.5
$ next dev
ready - started server on http://localhost:3000
It looks like you're trying to use TypeScript but do not have the required package(s) installed.

Please install typescript, @types/react, and @types/node by running:

  yarn add --dev typescript @types/react @types/node

If you are not trying to use TypeScript, please remove the tsconfig.json file from your package root (and any TypeScript files in your pages directory).

This is because we added a new TypeScript file but didn’t install the dependencies required to compile it. We can fix that by installing the missing dependencies. From the root of our app, we can execute the following command:

yarn add --dev typescript @types/react @types/node

Now, if we start our Next.js server, our app should compile:

$ yarn dev
yarn run v1.22.5
$ next dev
ready - started server on http://localhost:3000
We detected TypeScript in your project and created a tsconfig.json file for you.
event - compiled successfully

Adding NextAuth

NextAuth is an authentication library for Next.js. It’s simple, easy to understand, flexible, and secure by default. To set up NextAuth in our app, we’ll need to install it by running the following command from the root of our app:

yarn add next-auth

Next, we’ll have to update our pages/_app.tsx file with the following content:

import { ChakraProvider } from "@chakra-ui/react";
import { Provider as NextAuthProvider } from "next-auth/client";
import { AppProps } from "next/app";
import Head from "next/head";
import React from "react";

const App = ({ Component, pageProps }: AppProps) => {
  return (
    <>
      <Head>
        <link rel="shortcut icon" href="/images/favicon.ico" />
      </Head>
      <NextAuthProvider session={pageProps.session}>
        <ChakraProvider>
          <Component {...pageProps} />
        </ChakraProvider>
      </NextAuthProvider>
    </>
  );
};

export default App;

Here, we’re wrapping our app with NextAuthProvider. Next, we’ll have to create a new file named [...nextauth].ts inside the pages/api/auth directory with the following content:

import { NextApiRequest, NextApiResponse } from "next";
import NextAuth from "next-auth";
import Providers from "next-auth/providers";

const options = {
  providers: [
    Providers.Twitter({
      clientId: process.env.TWITTER_KEY,
      clientSecret: process.env.TWITTER_SECRET,
    }),
  ],
};

export default NextAuth(options);

The above file will be responsible for handling our authentication using Next.js API routes. Next, we’ll create a new file named .env in the root of our app to store all our environment variables, with the following content (we’ll fill in the Twitter values shortly):

DATABASE_URL="postgresql://postgres:@localhost:5432/postgres"
NEXT_PUBLIC_API_URL="http://localhost:3000"
TWITTER_KEY=""
TWITTER_SECRET=""
The Twitter environment variables will be generated from the Twitter API. We’ll be doing that next. We can create a new Twitter app from the Twitter Developer dashboard.

  1. Create a new Twitter app by entering its name and click on the Complete button.

    Create a new Twitter app

  2. Copy the API key, API secret key and Bearer token in the next screen.

    The credentials of our Twitter app

  3. Change the App permissions from Read Only to Read and Write in the next screen.

    Twitter app permissions

  4. Click on the Edit button next to the Authentication settings to enable 3-legged OAuth.

    Authentication settings for our Twitter app

  5. Enable 3-legged OAuth and Request email address from users and add http://localhost:3000/api/auth/callback/twitter as a Callback URL.

    Edit the authentication settings of our Twitter app

  6. The Website URL, Terms of service and Privacy policy fields can point to any valid URLs.

Our 3-legged OAuth should be enabled now.

Enable the 3-legged OAuth of our Twitter app

Paste the value of the API key from Step 2 into the TWITTER_KEY environment variable and the value of API secret key into the TWITTER_SECRET environment variable.

Our .env file should look like this now:

TWITTER_KEY="1234" # Replace this with your own API key
TWITTER_SECRET="secret" # Replace this with your own API secret key
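A missing or empty credential in .env tends to surface later as an opaque OAuth error, so it can be worth failing fast at startup. Here's a small sketch; assertEnv is a hypothetical helper name, not part of the tutorial:

```typescript
// Hypothetical helper: throw early if required variables are missing from a
// config object (in the app you'd pass process.env).
function assertEnv(
  names: string[],
  env: Record<string, string | undefined>
): void {
  for (const name of names) {
    if (!env[name]) {
      throw new Error(`Missing required environment variable: ${name}`);
    }
  }
}

// Example with the keys our .env file uses:
assertEnv(["TWITTER_KEY", "TWITTER_SECRET"], {
  TWITTER_KEY: "1234",
  TWITTER_SECRET: "secret",
});
```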

Now, if we restart our Next.js server and visit http://localhost:3000/api/auth/signin, we should be able to see the Sign in with Twitter button:

Sign in with Twitter button

If we click on that button, we’ll be able to authorize our Twitter app but we won’t be able to log in to our app. Our terminal will show the following error:


We’ll fix this issue next, when we add and configure Prisma.

Adding and Configuring Prisma

First, we need to install all the necessary dependencies. We can do that by running the following command from the root of our app:

yarn add prisma @prisma/client

Next, let’s create a new file named prisma.ts inside the lib/clients directory with the following content:

import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export default prisma;

This PrismaClient will be re-used across multiple files. Next, we’ll have to update our pages/api/auth/[...nextauth].ts file with the following content:

....
import prisma from "../../../lib/clients/prisma";
import Adapters from "next-auth/adapters";
....

const options = {
  providers: [
    ....
  ],
  adapter: Adapters.Prisma.Adapter({ prisma }),
};
....

Now, if we visit http://localhost:3000/api/auth/signin, we’ll get the following error on our terminal:

Error: @prisma/client did not initialize yet. Please run "prisma generate" and try to import it again.

To fix this issue, we’ll have to do the following:

  1. Run npx prisma init from the root of our app:

 $ npx prisma init
 Environment variables loaded from .env

 ✔ Your Prisma schema was created at prisma/schema.prisma.
   You can now open it in your favorite editor.

 warn Prisma would have added DATABASE_URL="postgresql://johndoe:randompassword@localhost:5432/mydb?schema=public" but it already exists in .env

 Next steps:
 1. Set the DATABASE_URL in the .env file to point to your existing database. If your database has no tables yet, read
 2. Set the provider of the datasource block in schema.prisma to match your database: postgresql, mysql or sqlite.
 3. Run prisma introspect to turn your database schema into a Prisma data model.
 4. Run prisma generate to install Prisma Client. You can then start querying your database.

 More information in our documentation:

  2. Run npx prisma generate from the root of our app:

 $ npx prisma generate
 Environment variables loaded from .env
 Prisma schema loaded from prisma/schema.prisma
 Error: You don't have any models defined in your schema.prisma, so nothing will be generated.
 You can define a model like this:

 model User {
   id    Int     @id @default(autoincrement())
   email String  @unique
   name  String?
 }

 More information in our documentation:

  3. Update the prisma/schema.prisma file with the schema that NextAuth expects:

 // prisma/schema.prisma

 generator client {
   provider = "prisma-client-js"
 }

 datasource db {
   provider = "postgresql"
   url      = env("DATABASE_URL")
 }

 model Account {
   id                 Int       @id @default(autoincrement())
   compoundId         String    @unique @map("compound_id")
   userId             Int       @map("user_id")
   providerType       String    @map("provider_type")
   providerId         String    @map("provider_id")
   providerAccountId  String    @map("provider_account_id")
   refreshToken       String?   @map("refresh_token")
   accessToken        String?   @map("access_token")
   accessTokenExpires DateTime? @map("access_token_expires")
   createdAt          DateTime  @default(now()) @map("created_at")
   updatedAt          DateTime  @default(now()) @map("updated_at")

   @@index([providerAccountId], name: "providerAccountId")
   @@index([providerId], name: "providerId")
   @@index([userId], name: "userId")
   @@map("accounts")
 }

 model Session {
   id           Int      @id @default(autoincrement())
   userId       Int      @map("user_id")
   expires      DateTime
   sessionToken String   @unique @map("session_token")
   accessToken  String   @unique @map("access_token")
   createdAt    DateTime @default(now()) @map("created_at")
   updatedAt    DateTime @default(now()) @map("updated_at")

   @@map("sessions")
 }

 model User {
   id            Int       @id @default(autoincrement())
   name          String?
   email         String?   @unique
   emailVerified DateTime? @map("email_verified")
   image         String?
   createdAt     DateTime  @default(now()) @map("created_at")
   updatedAt     DateTime  @default(now()) @map("updated_at")
   tweets        Tweet[]

   @@map("users")
 }

 model VerificationRequest {
   id         Int      @id @default(autoincrement())
   identifier String
   token      String   @unique
   expires    DateTime
   createdAt  DateTime @default(now()) @map("created_at")
   updatedAt  DateTime @default(now()) @map("updated_at")

   @@map("verification_requests")
 }

  4. Add the schema for Tweet in the prisma/schema.prisma file:

 // prisma/schema.prisma

 ....

 model Tweet {
   id        Int      @id @default(autoincrement())
   body      String
   userId    Int
   createdAt DateTime @default(now()) @map("created_at")
   updatedAt DateTime @default(now()) @map("updated_at")
   author    User     @relation(fields: [userId], references: [id])

   @@map("tweets")
 }

  5. Run npx prisma migrate dev --preview-feature from the root of our app to create a new migration. Enter the name of the migration (such as init-database) when prompted.

Now, if we visit http://localhost:3000/api/auth/signin and click on the Sign in with Twitter button, we’ll be logged in to our app using Twitter.

Adding Some Seed Data

So that the UI isn’t completely bare as we work on the app, let’s add some seed data.

Let’s start off by installing a couple of dependencies:

yarn add -D faker ts-node

This pulls in faker.js, which will aid us in generating fake data, as well as ts-node, which we need to execute the TypeScript seed script.

Next, create a new seed.ts file in the prisma folder, and add the following content:

import faker from "faker";
import prisma from "../lib/clients/prisma";

async function main() {
  const listOfNewUsers = [...Array(5)].map(() => {
    return {
      email: faker.internet.email(),
      name: faker.name.findName(),
      image: faker.image.image(),
      tweets: {
        create: {
          body: faker.lorem.sentence(),
        },
      },
    };
  });

  for (let data of listOfNewUsers) {
    const user = await prisma.user.create({
      data,
    });

    console.log(user);
  }
}

main()
  .catch((e) => {
    console.error(e);
    process.exit(1);
  })
  .finally(async () => {
    await prisma.$disconnect();
  });

We’ll also need to update our tsconfig.json file, as shown:

{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "skipLibCheck": true,
    "strict": false,
    "forceConsistentCasingInFileNames": true,
    "noEmit": true,
    "esModuleInterop": true,
    "module": "commonjs",
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "isolatedModules": true,
    "jsx": "preserve",
    "baseUrl": ".",
    "paths": {
      "*": ["/*"],
      "components/*": ["components/*"],
      "pages/*": ["pages/*"],
      "types/*": ["types/*"],
      "lib/*": ["lib/*"]
    }
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx"],
  "exclude": ["node_modules"]
}

Finally, we can run npx prisma db seed --preview-feature to seed our database with some test data.

Adding React Query

React Query is a very popular and performant way of fetching data in React.js apps. Let’s add React Query to our app. We can install React Query by running the following command from the root of our app:

yarn add react-query

Next, let’s create a new file named react-query.ts inside the lib/clients directory with the following content:

import { QueryClient } from "react-query";

const queryClient = new QueryClient();

export default queryClient;

We’ll also need to update our pages/_app.tsx file with the following content:

....
import { QueryClientProvider } from "react-query";
import { Hydrate } from "react-query/hydration";
import queryClient from "../lib/clients/react-query";

const App = ({ Component, pageProps }: AppProps) => {
  return (
    <QueryClientProvider client={queryClient}>
      <Hydrate state={pageProps.dehydratedState}>
        <Head>
          <link rel="shortcut icon" href="/images/favicon.ico" />
        </Head>
        <NextAuthProvider session={pageProps.session}>
          <ChakraProvider>
            <Component {...pageProps} />
          </ChakraProvider>
        </NextAuthProvider>
      </Hydrate>
    </QueryClientProvider>
  );
};

export default App;

Here, we’re wrapping our app with QueryClientProvider, which will provide a QueryClient to our app.

Option to View a List of Tweets

Let’s create a new file called fetch-tweets.ts inside the lib/queries directory, with the following content:

const fetchTweets = async () => {
  const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/tweets`);
  const data = await res.json();
  return data;
};

export default fetchTweets;

This function will be responsible for fetching all the tweets in our app. Next, create a new file called tweets.tsx inside the pages directory with the following content:

import fetchTweets from "../lib/queries/fetch-tweets";
import queryClient from "../lib/clients/react-query";
import { GetServerSideProps, InferGetServerSidePropsType } from "next";
import { useSession } from "next-auth/client";
import Head from "next/head";
import React from "react";
import { useQuery } from "react-query";
import { dehydrate } from "react-query/hydration";

const TweetsPage: InferGetServerSidePropsType<typeof getServerSideProps> = ({}) => {
  const { data } = useQuery("tweets", fetchTweets);
  const [session] = useSession();

  if (!session) {
    return <div>Not authenticated.</div>;
  }

  return (
    <>
      <Head>
        <title>All tweets</title>
      </Head>
      {console.log(JSON.stringify(data, null, 2))}
    </>
  );
};

export const getServerSideProps: GetServerSideProps = async ({ req }) => {
  await queryClient.prefetchQuery("tweets", fetchTweets);

  return {
    props: {
      dehydratedState: dehydrate(queryClient),
    },
  };
};

export default TweetsPage;

getServerSideProps is a Next.js function that helps in fetching data on the server. Let’s also create a new file named index.ts inside the pages/api/tweets directory with the following content:

import prisma from "../../../lib/clients/prisma";
import type { NextApiRequest, NextApiResponse } from "next";

export default async (req: NextApiRequest, res: NextApiResponse) => {
  if (req.method === "POST") {
    try {
      const { body } = req;
      const tweet = await prisma.tweet.create({ data: JSON.parse(body) });

      return res.status(200).json(tweet);
    } catch (error) {
      return res.status(422).json(error);
    }
  } else if (req.method === "GET") {
    try {
      const tweets = await prisma.tweet.findMany({
        include: {
          author: true,
        },
        orderBy: [
          {
            createdAt: "desc",
          },
        ],
      });

      return res.status(200).json(tweets);
    } catch (error) {
      return res.status(422).json(error);
    }
  }

  res.end();
};

Here, we’re checking the request method. If it’s a POST request, we create a new tweet. If it’s a GET request, we send back all the tweets along with their authors’ details. Now, if we visit http://localhost:3000/tweets, we’ll see all the tweets logged in our browser’s console.
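The POST branch hands whatever JSON.parse returns straight to Prisma. A defensive sketch of validating the payload shape first (parseTweetBody is a hypothetical helper, not in the tutorial):

```typescript
// Hypothetical validation sketch: accept only objects with a non-empty
// string "body" field before passing them on to the database layer.
function parseTweetBody(raw: string): { body: string } | null {
  try {
    const parsed = JSON.parse(raw);
    if (
      parsed &&
      typeof parsed === "object" &&
      typeof parsed.body === "string" &&
      parsed.body.trim() !== ""
    ) {
      return { body: parsed.body };
    }
    return null;
  } catch {
    return null;
  }
}
```

A null result could then map to the 422 response the handler already returns.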

List of tweets from the API endpoint

Note that, as faker.js generates random data, what you see logged to your browser’s console will vary from the screenshot. We’ll add the option to add a tweet later.

Next, let’s build the user interface for showing the list of tweets. We can create a new file named index.tsx inside the components/pages/tweets directory with the following content:

import { Box, Grid, Stack } from "@chakra-ui/react";
import Tweet from "./tweet";
import React from "react";
import ITweet from "types/tweet";

const TweetsPageComponent = ({ tweets }) => {
  return (
    <Stack spacing={8}>
      <Grid templateColumns={["1fr", "1fr", "repeat(2, 1fr)"]} gap={8}>
        {tweets?.map((tweet: ITweet) => {
          return (
            <Box key={}>
              <Tweet tweet={tweet} />
            </Box>
          );
        })}
      </Grid>
    </Stack>
  );
};

export default TweetsPageComponent;

Let’s also create a new file named tweet.tsx inside the same directory (components/pages/tweets) with the following content:

import { Avatar, Box, Stack, Text } from "@chakra-ui/react";
import React, { FC } from "react";

const Tweet: FC = ({ tweet }) => {
  const authorNode = () => {
    return (
      <Stack
        spacing={4}
        isInline
        alignItems="center"
        p={4}
        borderBottomWidth={1}
      >
        <Avatar name={} src={} />
        <Stack>
          <Text fontWeight="bold">{}</Text>
        </Stack>
      </Stack>
    );
  };

  const bodyNode = () => {
    return (
      <Text fontSize="md" p={4}>
        {tweet.body}
      </Text>
    );
  };

  return (
    <Box shadow="lg" rounded="lg">
      <Stack spacing={0}>
        {authorNode()}
        {bodyNode()}
      </Stack>
    </Box>
  );
};

export default Tweet;

Next, let’s update our pages/tweets.tsx file with the following content:

....
import Page from "../components/pages/tweets";
....

const TweetsPage: InferGetServerSidePropsType<typeof getServerSideProps> = ({}) => {
  ....

  return (
    <>
      <Head>
        <title>All tweets</title>
      </Head>
      <Page tweets={data} />
    </>
  );
};
....
Here, we’ve modified the interface of our app. Now, if we visit http://localhost:3000/tweets, we should be able to see the following:

List of tweets

Option to Add a New Tweet

Let’s add a text area through which we can add a new tweet. To do that, let’s create a new file named add-new-tweet-form.tsx inside the components/pages/tweets directory with the following content:

import {
  Box,
  Button,
  FormControl,
  FormLabel,
  Stack,
  Textarea,
} from "@chakra-ui/react";
import saveTweet from "../../../lib/mutations/save-tweet";
import fetchTweets from "../../../lib/queries/fetch-tweets";
import queryClient from "../../../lib/clients/react-query";
import { useSession } from "next-auth/client";
import React, { ChangeEvent, useState } from "react";
import { useMutation, useQuery } from "react-query";

const AddNewTweetForm = () => {
  const [body, setBody] = useState("");
  const [session] = useSession();
  const { refetch } = useQuery("tweets", fetchTweets);
  const mutation = useMutation(saveTweet, {
    onSuccess: async () => {
      await queryClient.invalidateQueries("tweets");

      refetch();
    },
  });

  if (!session) {
    return <div>Not authenticated.</div>;
  }

  const handleSubmit = () => {
    const data = {
      body,
      author: {
        connect: { email: },
      },
    };

    mutation.mutate(data);

    if (!mutation.error) {
      setBody("");
    }
  };

  return (
    <Stack spacing={4}>
      <Box p={4} shadow="lg" rounded="lg">
        <Stack spacing={4}>
          <FormControl isRequired>
            <FormLabel htmlFor="body">What's on your mind?</FormLabel>
            <Textarea
              id="body"
              value={body}
              onChange={(e: ChangeEvent<HTMLTextAreaElement>) =>
                setBody(e.currentTarget.value)
              }
            />
          </FormControl>
          <FormControl>
            <Button
              loadingText="Posting..."
              onClick={handleSubmit}
              isDisabled={!body.trim()}
            >
              Post
            </Button>
          </FormControl>
        </Stack>
      </Box>
    </Stack>
  );
};

export default AddNewTweetForm;

The mutation function is responsible for doing the POST request to the server. It also re-fetches the data once the request is successful. Also, let’s create a new file named save-tweet.ts inside the lib/mutations directory with the following content:

const saveTweet = async (body: any) => {
  const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/tweets`, {
    method: "POST",
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data;
};

export default saveTweet;

We also need to modify our components/pages/tweets/index.tsx file with the following content:

....
import AddNewTweetForm from "./add-new-tweet-form";
....

const TweetsPageComponent = ({ tweets }) => {
  return (
    <Stack spacing={8}>
      <Box>
        <AddNewTweetForm />
      </Box>
      ....
    </Stack>
  );
};

export default TweetsPageComponent;

Now, we should be able to view a textarea if we visit http://localhost:3000/tweets:

Textarea to add new tweets

We should also be able to add a new tweet using the textarea (this won’t tweet to your actual account!):

Add a new tweet

Next, we’ll add the option to view the profile of a user which shows only the tweets posted by that user.

Option to View the Profile of a User with only Their Tweets

First, we’ll create a page that will show a list of all the users. To do that, we’ll need to create a new file named index.tsx inside the pages/users directory with the following content:

import { GetServerSideProps, InferGetServerSidePropsType } from "next";
import { useSession } from "next-auth/client";
import Head from "next/head";
import React from "react";
import { useQuery } from "react-query";
import { dehydrate } from "react-query/hydration";
import Page from "../../components/pages/users";
import queryClient from "../../lib/clients/react-query";
import fetchUsers from "../../lib/queries/fetch-users";

const MyAccountPage: InferGetServerSidePropsType<typeof getServerSideProps> = ({}) => {
  const { data } = useQuery("users", fetchUsers);
  const [session] = useSession();

  if (!session) {
    return <div>Not authenticated.</div>;
  }

  return (
    <>
      <Head>
        <title>All users</title>
      </Head>
      <Page users={data} />
    </>
  );
};

export const getServerSideProps: GetServerSideProps = async ({ req }) => {
  await queryClient.prefetchQuery("users", fetchUsers);

  return {
    props: {
      dehydratedState: dehydrate(queryClient),
    },
  };
};

export default MyAccountPage;

We’ll also need to create a new file named fetch-users.ts inside the lib/queries directory with the following content:

const fetchUsers = async () => {
  const res = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/api/users`);
  const data = await res.json();
  return data;
};

export default fetchUsers;

This function will be responsible for fetching all the users from the API endpoint. We’ll also need to create a new file named index.tsx inside the components/pages/users directory with the following content:

import { Box, Grid, Stack } from "@chakra-ui/react";
import React from "react";
import User from "./user";

const UsersPageComponent = ({ users }) => {
  return (
    <Stack spacing={8}>
      <Grid templateColumns={["1fr", "1fr", "repeat(2, 1fr)"]} gap={8}>
        {users?.map((user) => {
          return (
            <Box key={}>
              <User user={user} />
            </Box>
          );
        })}
      </Grid>
    </Stack>
  );
};

export default UsersPageComponent;

Next, let’s create a file named user.tsx inside the same directory (components/pages/users) with the following content:

import { Avatar, Box, Stack, Text, Button } from "@chakra-ui/react";
import Link from "next/link";
import React, { FC } from "react";

const User: FC = ({ user }) => {
  const authorNode = () => {
    return (
      <Stack
        spacing={4}
        isInline
        alignItems="center"
        p={4}
        borderBottomWidth={1}
      >
        <Avatar name={} src={user.image} />
        <Stack>
          <Text fontWeight="bold">{}</Text>
        </Stack>
      </Stack>
    );
  };

  const bodyNode = () => {
    return (
      <Text fontSize="md" p={4}>
        {}
      </Text>
    );
  };

  const buttonNode = () => {
    return (
      <Box p={4} borderTopWidth={1}>
        <Link href={`/users/${}`}>
          <Button>View profile</Button>
        </Link>
      </Box>
    );
  };

  return (
    <Box shadow="lg" rounded="lg">
      <Stack spacing={0}>
        {authorNode()}
        {bodyNode()}
        {buttonNode()}
      </Stack>
    </Box>
  );
};

export default User;

And one more file named index.ts inside the pages/api/users directory with the following content:

import prisma from "../../../lib/clients/prisma";
import type { NextApiRequest, NextApiResponse } from "next";

export default async (req: NextApiRequest, res: NextApiResponse) => {
  if (req.method === "GET") {
    try {
      const users = await prisma.user.findMany({
        orderBy: [
          {
            createdAt: "desc",
          },
        ],
      });

      return res.status(200).json(users);
    } catch (error) {
      return res.status(422).json(error);
    }
  }

  res.end();
};

The above function is responsible for sending the details of all the users. Now, if we visit http://localhost:3000/users, we should be able to see a list of users:

List of users

Now, let’s create the page to show the details for a single user. To do that, we’ll need to create a new file named [id].tsx inside the pages/users directory with the following content:

import Page from "../../components/pages/users/[id]";
import queryClient from "../../lib/clients/react-query";
import fetchUser from "../../lib/queries/fetch-user";
import { GetServerSideProps, InferGetServerSidePropsType } from "next";
import { getSession, useSession } from "next-auth/client";
import Head from "next/head";
import React from "react";
import { useQuery } from "react-query";
import { dehydrate } from "react-query/hydration";

const MyAccountPage: InferGetServerSidePropsType<typeof getServerSideProps> = ({
  id,
}) => {
  const { data } = useQuery("user", () => fetchUser(parseInt(id as string)));
  const [session] = useSession();

  if (!session) {
    return <div>Not authenticated.</div>;
  }

  return (
    <>
      <Head>
        <title>{}'s profile</title>
      </Head>
      <Page user={data} />
    </>
  );
};

export const getServerSideProps: GetServerSideProps = async ({ query }) => {
  await queryClient.prefetchQuery("user", () =>
    fetchUser(parseInt( as string))
  );

  return {
    props: {
      dehydratedState: dehydrate(queryClient),
      id:,
    },
  };
};

export default MyAccountPage;

The value of determines the id of the current user. We’ll also need to create a new file named fetch-user.ts inside the lib/queries directory with the following content:

const fetchUser = async (userId: number) => {
  const res = await fetch(
    `${process.env.NEXT_PUBLIC_API_URL}/api/users/${userId}`
  );
  const data = await res.json();
  return data;
};

export default fetchUser;

The above function will be responsible for doing the GET request to the API endpoint. Next, we’ll need to create a new file named index.tsx inside the components/pages/users/[id] directory with the following content:

import { Avatar, Box, Grid, Stack, Text } from "@chakra-ui/react";
import Tweet from "./tweet";
import React, { FC } from "react";

const UsersPageComponent: FC = ({ user }) => {
  const authorNode = () => {
    return (
      <Stack spacing={4} isInline alignItems="center">
        <Avatar name={user?.name} src={user?.image} />
        <Stack>
          <Text fontWeight="bold" fontSize="4xl">
            {user?.name}
          </Text>
        </Stack>
      </Stack>
    );
  };

  return (
    <Stack spacing={8}>
      {authorNode()}
      <Grid templateColumns={["1fr", "1fr", "repeat(2, 1fr)"]} gap={8}>
        {user? => {
          return (
            <Box key={}>
              <Tweet tweet={tweet} />
            </Box>
          );
        })}
      </Grid>
    </Stack>
  );
};

export default UsersPageComponent;

Next, we’ll need to create one more file named tweet.tsx inside the same directory (components/pages/users/[id]) with the following content:

import { Box, Stack, Text } from "@chakra-ui/react";
import React, { FC } from "react";

const Tweet: FC = ({ tweet }) => {
  const bodyNode = () => {
    return (
      <Text fontSize="md" p={4}>
        {tweet.body}
      </Text>
    );
  };

  return (
    <Box shadow="lg" rounded="lg">
      <Stack spacing={0}>{bodyNode()}</Stack>
    </Box>
  );
};

export default Tweet;

Finally, we’ll need to create one more file named [id].ts inside the pages/api/users directory with the following content:

import prisma from "../../../lib/clients/prisma";
import type { NextApiRequest, NextApiResponse } from "next";

export default async (req: NextApiRequest, res: NextApiResponse) => {
  if (req.method === "GET") {
    const userId = parseInt( as string);

    try {
      const tweets = await prisma.user.findUnique({
        include: {
          tweets: true,
        },
        where: {
          id: userId,
        },
      });

      return res.status(200).json(tweets);
    } catch (error) {
      console.log(error);

      return res.status(422).json(error);
    }
  }

  res.end();
};

The above function is responsible for sending the details of the user whose id matches We’re converting it to a number, as Prisma expects the id to be numeric. Now, if we visit http://localhost:3000/users and click on the View profile button for a user, we’ll see a list of the tweets posted by that user.
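One caveat with the parseInt coercion above: a malformed id silently becomes NaN. A hedged sketch of a safer version (toUserId is a hypothetical helper name, not in the tutorial):

```typescript
// Hypothetical helper: coerce a Next.js query param (string | string[] |
// undefined) into the numeric id Prisma expects, or null when malformed.
function toUserId(raw: string | string[] | undefined): number | null {
  const value = Array.isArray(raw) ? raw[0] : raw;
  const id = parseInt(value ?? "", 10);
  return Number.isNaN(id) ? null : id;
}
```

The handler could then return a 400 response when the result is null instead of querying with NaN.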

Profile of a user with all the tweets posted by that user


Conclusion

In this tutorial, we’ve learned how to use Next.js and Prisma together to build a clone of Twitter. Obviously, Twitter has a lot of other features, like retweets, comments and sharing for each tweet. However, this tutorial should provide the base for building such features.

The code for the app we built is available on GitHub. Feel free to check it out. You can also check out a live demo of the app we’ve been building here.



How I Built a Wheel of Fortune JavaScript Game for My Zoom Group

Building a Wheel of Fortune JavaScript Game for Zoom Calls – SitePoint


In this article, I describe how I developed a JavaScript “Wheel of Fortune” game to make online meetings via Zoom a little more fun during the global pandemic.

The current pandemic has forced many social activities to go virtual. Our local Esperanto group, for example, now meets online (instead of in person) for our monthly language study meetups. And as the group’s organizer, I’ve had to re-think many of our activities because of the coronavirus. Previously, I could add watching a film, or even a stroll through the park, to our mix of activities in an effort to avoid fatigue (constant grammar drills don’t encourage repeat attendance).

Our new Wheel of Fortune game was well received. Of course, SitePoint is a tech blog, so I’ll be presenting an overview of what went into building a rudimentary version of the game to screenshare in our online meetings. I’ll discuss some of the trade-offs I made along the way, as well as highlight some possibilities for improvement and things I should have done differently in hindsight.

First Things First

If you’re from the United States, you’re probably already familiar with Wheel of Fortune, as it’s the longest-running American game show in history. (Even if you’re not in the United States, you’re probably familiar with some variant of the show, as it’s been adapted and aired in over 40 international markets.) The game is essentially Hangman: contestants try to solve a hidden word or phrase by guessing its letters. Prize amounts for each correct letter are determined by spinning a large roulette-style wheel bearing dollar amounts — and the dreaded Bankrupt spots. A contestant spins the wheel, guesses a letter, and any instances of said letter in the puzzle are revealed. Correct guesses earn the contestant another chance to spin and guess, while incorrect guesses advance game play to the next contestant. The puzzle is solved when a contestant successfully guesses the word or phrase. The rules and various elements of the game have been tweaked over the years, and you can certainly adapt your own version to the needs of your players.
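The reveal rule at the heart of the game fits in a few lines. Here's a sketch; the function name and underscore placeholder are my own, not from the article:

```typescript
// Show every guessed letter; hide the rest behind underscores.
function maskPuzzle(puzzle: string, guessed: string[]): string {
  return puzzle
    .split("")
    .map((ch) => (ch === " " || guessed.includes(ch) ? ch : "_"))
    .join("");
}

console.log(maskPuzzle("HELLO WORLD", ["L", "O"])); // "__LLO _O_L_"
```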

For me, the first order of business was to decide how we would physically (virtually) play the game. I only needed the game for one or two meetings, and I wasn’t willing to invest a lot of time building a full-fledged gaming platform, so building the app as a web page that I could load locally and screenshare with others was fine. I would emcee the activity and drive the gameplay with various keystrokes based on what the players wanted. I also decided to keep score using pencil and paper — something I’d later regret. But in the end, plain ol’ JavaScript, a little bit of canvas, and a handful of images and sound effect files was all I needed to build the game.

The Game Loop and Game State

Although I was envisioning this as a “quick and dirty” project rather than some brilliantly coded masterpiece following every known best practice, my first thought was still to start building a game loop. Generally speaking, gaming code is a state machine that maintains variables and such, representing the current state of the game with some extra code bolted on to handle user input, manage/update the state, and render the state with pretty graphics and sound effects. Code known as the game loop repeatedly executes, triggering the input checks, state updates, and rendering. If you’re going to build a game properly, you’ll most likely be following this pattern. But I soon realized I didn’t need constant state monitoring/updating/rendering, and so I forwent the game loop in favor of basic event handling.
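For readers unfamiliar with the pattern, the input → update → render cycle can be modelled as a pure state machine. This sketch uses illustrative names of my own; it isn't the author's code, who ultimately chose event handlers instead:

```typescript
// One synchronous "frame": fold the input into the state, then render it.
type GameState = { score: number; guesses: string[] };

function update(state: GameState, guess: string, prize: number): GameState {
  return { score: state.score + prize, guesses: [...state.guesses, guess] };
}

function render(state: GameState): string {
  return `Score: $${state.score}, guessed: ${state.guesses.join(", ")}`;
}

let state: GameState = { score: 0, guesses: [] };
state = update(state, "L", 500);
state = update(state, "O", 250);
console.log(render(state)); // "Score: $750, guessed: L, O"
```

A real loop would repeat that cycle via requestAnimationFrame; here each keypress simply triggers one pass.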

In terms of maintaining state, the code needed to know the current puzzle, which letters have been guessed already, and which view to display (either the puzzle board or the spinning wheel). Those would be globally available to any callback logic. Any activities within the game would be triggered when handling a keypress.

Here’s what the core code started to look like:

(function (appId) {
  const canvas = document.getElementById(appId);
  const ctx = canvas.getContext('2d');

  let puzzles = [];
  let currentPuzzle = -1;
  let guessedLetters = [];
  let isSpinning = false;

  window.addEventListener('keypress', (evt) => {
  });

The Game Board and Puzzles

Wheel of Fortune’s game board is essentially a grid, with each cell in one of three states:

  • empty: empty cells aren’t used in the puzzle (green)
  • blank: the cell represents a hidden letter in the puzzle (white)
  • visible: the cell reveals a letter in the puzzle

One approach to writing the game would be to use an array representing the game board, with each element as a cell in one of those states, and rendering that array could be accomplished several different ways. Here’s one example:

let puzzle = [...'########HELLO##WORLD########'];
const cols = 7;
const width = 30;
const height = 35;

puzzle.forEach((letter, index) => {
  let x = width * (index % cols);
  let y = height * Math.floor(index / cols);

  ctx.fillStyle = (letter === '#') ? 'green' : 'white';
  ctx.fillRect(x, y, width, height);
  ctx.strokeStyle = 'black';
  ctx.strokeRect(x, y, width, height);

  if (guessedLetters.includes(letter)) {
    ctx.fillStyle = 'black';
    ctx.fillText(letter, x + (width / 2), y + (height / 2));
  }
});

This approach iterates through each letter in a puzzle, calculating the starting coordinates, drawing a rectangle for the current cell based on the index and other details — such as the number of columns in a row and the width and height of each cell. It checks the character and colors the cell accordingly, assuming # is used to denote an empty cell and a letter denotes a blank. Guessed letters are then drawn on the cell to reveal them.

A potential game board rendered using the above code
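Incidentally, the index-to-coordinate math in that snippet is a pure calculation, so it can be pulled into a small helper (the name `cellPosition` is hypothetical) and tested away from the canvas:

```javascript
// Map a cell index to its top-left pixel coordinates on the board.
// cols, width, and height mirror the layout constants used above.
function cellPosition(index, cols, width, height) {
  return {
    x: width * (index % cols),
    y: height * Math.floor(index / cols)
  };
}

// Index 8 with 7 columns sits in row 1, column 1.
console.log(cellPosition(8, 7, 30, 35)); // { x: 30, y: 35 }
```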

Another approach would be to prepare a static image of the board for each puzzle beforehand, which would be drawn to the canvas. This approach can add a fair amount of effort to puzzle preparation, as you’ll need to create additional images, possibly determine the position of each letter to draw on the custom board, and encode all of that information into a data structure suitable for rendering. The trade-off would be better-looking graphics and perhaps better letter positioning.

This is what a puzzle might look like following this second approach:

let puzzle = {
  background: 'img/puzzle-01.png',
  letters: [
    {chr: 'H', x: 45, y: 60},
    {chr: 'E', x: 75, y: 60},
    {chr: 'L', x: 105, y: 60},
    {chr: 'L', x: 135, y: 60},
    {chr: 'O', x: 165, y: 60},
    {chr: 'W', x: 45, y: 100},
    {chr: 'O', x: 75, y: 100},
    {chr: 'R', x: 105, y: 100},
    {chr: 'L', x: 135, y: 100},
    {chr: 'D', x: 165, y: 100}
  ]
};

For the sake of efficiency, I’d recommend including another array to track matching letters. With only the guessedLetters array available, you’d need to scan the puzzle’s letters repeatedly for multiple matches. Instead, you can set up an array to track the solved letters and just copy the matching definitions to it when the player makes their guess, like so:

const solvedLetters = [];

puzzle.letters.forEach((letter) => {
  if (letter.chr === evt.key) {
    solvedLetters.push(letter);
  }
});

Rendering this puzzle then looks like this:

const imgPuzzle = new Image();
imgPuzzle.onload = function () {
  ctx.drawImage(this, 0, 0);

  solvedLetters.forEach((letter) => {
    ctx.fillText(letter.chr, letter.x, letter.y);
  });
};
imgPuzzle.src = puzzle.background;

A potential game board rendered using the alternative approach

For the record, I took the second approach when writing my game. But the important takeaway here is that there are often multiple solutions to the same problem. Each solution comes with its own pros and cons, and deciding on a particular solution will inevitably affect the design of your program.

Spinning the Wheel

At first blush, spinning the wheel appeared to be challenging: render a circle of colored segments with prize amounts, animate it spinning, and stop the animation on a random prize amount. But a little bit of creative thinking made this the easiest task in the entire project.

Regardless of your approach to encoding puzzles and rendering the game board, the wheel is probably something you’ll want to use a graphic for. It’s much easier to rotate an image than draw (and animate) a segmented circle with text; using an image does away with most of the complexity up front. Then, spinning the wheel becomes a matter of calculating a random number greater than 360 and repeatedly rotating the image that many degrees:

const maxPos = 360 + Math.floor(Math.random() * 360);

for (let i = 1; i < maxPos; i++) {
  setTimeout(() => {
    ctx.save();
    ctx.translate(640, 640);
    ctx.rotate(i * 0.01745);
    ctx.translate(-640, -640);
    ctx.drawImage(imgWheel, 0, 0);
    ctx.restore();
  }, i * 10);
}

I created a crude animation effect by using setTimeout to schedule rotations, with each rotation scheduled further and further into the future. In the code above, the first 1-degree rotation is rendered after 10 milliseconds, the second after 20 milliseconds, and so on. The net effect is a wheel that turns one degree every 10 milliseconds, or roughly one full rotation every 3.6 seconds. And ensuring the initial random number is greater than 360 guarantees that at least one full rotation is animated.
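A side note on the 0.01745 multiplier: ctx.rotate() expects radians, while the loop counts in degrees, so that constant is simply a rounded π/180:

```javascript
// Degrees-to-radians factor used by the rotation loop above.
const DEG_TO_RAD = Math.PI / 180;

console.log(DEG_TO_RAD.toFixed(5)); // "0.01745"

// A quarter turn (90 degrees) expressed in radians:
console.log(90 * DEG_TO_RAD); // ≈ 1.5708 (π/2)
```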

A brief note worth mentioning: feel free to play around with the “magic values” used to set and reset the center point around which the canvas is rotated. Depending on the size of your image, and on whether you want the entire image or just the top portion of the wheel to be visible, the exact midpoint may not produce what you have in mind. It’s okay to tweak the values until you achieve a satisfactory result. The same goes for the timeout multiplier, which you can modify to change the speed of the rotation animation.

Going Bankrupt

I think we all experience a bit of schadenfreude when a player’s spin lands on Bankrupt. It’s fun to watch a greedy contestant spin the wheel to rack up a few more letters when it’s obvious they already know the puzzle’s solution — only to lose it all. And there’s the fun bankruptcy sound effect, too! No game of Wheel of Fortune would be complete without it.

For this, I used the Audio object, which gives us the ability to play sounds in JavaScript:

function playSound(sfx) {
  sfx.currentTime = 0;
  sfx.play();
}

const sfxBankrupt = new Audio('sfx/bankrupt.mp3');
playSound(sfxBankrupt);

But what triggers the sound effect?

One solution would be to press a button to trigger the effect, since I’d be controlling the gameplay already, but it was more desirable for the game to automatically play the sound. Since Bankrupt wedges are the only black wedges on the wheel, it’s possible to know whether the wheel stops on Bankrupt simply by looking at the pixel color:

const maxPos = 360 + Math.floor(Math.random() * 360);

for (let i = 1; i < maxPos; i++) {
  setTimeout(() => {
    ctx.save();
    ctx.translate(640, 640);
    ctx.rotate(i * 0.01745);
    ctx.translate(-640, -640);
    ctx.drawImage(imgWheel, 0, 0);
    ctx.restore();

    if (i === maxPos - 1) {
      const color = ctx.getImageData(640, 12, 1, 1).data;

      if (color[0] === 0 && color[1] === 0 && color[2] === 0) {
        playSound(sfxBankrupt);
      }
    }
  }, i * 10);
}

I only focused on bankruptcies in my code, but this approach could be expanded to determine prize amounts as well. Although multiple amounts share the same wedge color — for example $600, $700, and $800 all appear on red wedges — you could use slightly different shades to differentiate the amounts: rgb(255, 50, 50), rgb(255, 51, 50), and rgb(255, 50, 51) are indistinguishable to human eyes but are easily identified by the application. In hindsight, this is something I should have pursued further. I found it mentally taxing to manually keep score while pressing keys and running the game, and the extra effort to automate score keeping would definitely have been worth it.

The differences between these shades of red are indistinguishable to the human eye
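Here’s a sketch of that scoring idea. The shade-to-amount table and the `wedgeValue` helper are hypothetical; the sampled shades would have to match whatever colors your wheel image actually uses:

```javascript
// Hypothetical lookup from a sampled pixel color to a prize value.
// Near-identical shades of red encode different dollar amounts.
const WEDGE_VALUES = new Map([
  ['255,50,50', 600],
  ['255,51,50', 700],
  ['255,50,51', 800],
  ['0,0,0', 'BANKRUPT']
]);

// `data` is the pixel array returned by ctx.getImageData(...).data
function wedgeValue(data) {
  const key = `${data[0]},${data[1]},${data[2]}`;
  return WEDGE_VALUES.get(key) ?? null; // null: not a recognized wedge
}

console.log(wedgeValue([255, 51, 50])); // 700
console.log(wedgeValue([0, 0, 0]));     // BANKRUPT
```

With a table like this, the same getImageData check that detects Bankrupt could award the spin’s dollar amount automatically.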


If you’re curious, you can find my code on GitHub. It isn’t the epitome of best practices, and there are lots of bugs (just like a lot of real-world code running in production environments!), but it served its purpose. Ultimately, the goal of this article was to inspire you and invite you to think critically about your own trade-off choices.

If you were building a similar game, what trade-offs would you make? What features would you deem critical? Perhaps you’d want proper animations and score keeping, or perhaps you’d even use WebSockets so contestants could play together in their own browsers rather than watching a screenshare of the emcee’s screen.

Looking beyond this particular example, what choices are you faced with in your daily work? How do you balance business priorities, proper coding practices, and tech debt? When does the desire to make things perfect become an obstacle to shipping a product? Let me know on Twitter.

Timothy Boronczyk

Timothy Boronczyk is a native of Syracuse, New York, where he lives with no wife and no cats. He has a degree in Software Application Programming, is a Zend Certified Engineer, and a Certified Scrum Master. By day, Timothy works as a developer at ShoreGroup, Inc. By night, he freelances as a writer and editor. Timothy enjoys spending what little spare time he has left visiting friends, dabbling with Esperanto, and sleeping with his feet off the end of his bed.




5 Web Design Trends for 2021


This sponsored article was created by our content partner, BAW Media. Thank you for supporting the partners who make SitePoint possible.

For almost everyone, 2020 was a bummer. Many businesses were forced to take creative measures just to survive. Consumers had to make adjustments as well, and even the Web has had to make some changes. 

Some of “yesterday’s” design trends had to make way for new ones too.

What, then, can we expect to see in the way of new design trends in 2021?

We’ll show you some examples of websites that have incorporated some of these new trends, along with a selection of BeTheme pre-built websites that are also putting them to good use.

Whether you’ll be creating sites for new clients or reworking existing sites to align with the newest trends, the following five design approaches should give you some valuable insight.

1. Use Soothing, Reassuring Color Palettes

Recent design trends have favored strong, bold colors. Various gradient schemes have also been both popular and effective. Why? Because these distinctive trends had a strong tendency to capture a visitor’s attention.

2020 gave us more than our fair share of worry, stress, and drama. We eagerly look forward to a return to a time in which we’ll once again feel more comfortable in our surroundings.

We don’t want people shouting at us, or websites shouting at us either, for that matter. Hence, the change to calmer, more toned-down color palettes.

The Bellroy website offers a good example of this toned-down look, with a calming color scheme that fits right in with its line of useful everyday products.

A Bellroy site screenshot

Note that when a brightly-colored product is displayed against a natural color palette, it will still stand out, but without getting in your face.

The BeSpa pre-built website, with its soothing color scheme, is another example of sending a message that’s calm and inspiring.

Screenshot of the BeSpa theme

Calm and comfortable doesn’t need to be boring. Far from it. An image like this encourages a visitor to live for the moment, and the safety and security that goes with it.

2. Strive to Creatively Blend Physical Experience with Digital Imagery

For the first time in their lives, many people found themselves stuck at home in 2020, with little to do but look at their screens — which in some cases involved remote work, and in other cases playing digital games.

Some web designers have picked up on this by blending real-world images with illustrations and/or special effects.

A case in point is seen on designer Constance Burke’s website.

A screenshot of the Constance Burke website: six women wearing an assortment of clothes

Instead of showing hand-drawn fashion sketches, or real models wearing real products, her portfolio creatively blends the two.

The BeSki pre-built site also blends the digital with the physical, but in a vastly different way.

The home page starts with a photograph of a skier. Notice how the snow in the hero section blends into the next section, which is composed of digital imagery. That section then blends into another real image, which blends back into a digital design.

3. Create More Efficient Pathways to Conversion

2020 saw more people shopping online — often out of necessity. This created a situation that encouraged website designers to provide visitors with more efficient pathways to conversion.

Since many of these visitors were newly acquainted with online shopping, it was important that their experience would be as effortless as possible — that they could get in and out as quickly as they normally would in a brick-and-mortar store.

This can be accomplished by more concise product descriptions, improved product search capabilities, add-to-cart shortcuts, and the like.

Walgreens’ product page design sets a good example for 2021 eCommerce website design.

Walgreens site screenshot

The product’s applicable details, as well as pickup and shipping options, customer ratings, and discounts or special offers, are clearly presented above the fold. Customers can either scroll for other relevant information or take their next step.

BePestControl’s pre-built site takes a similar approach.

The BePestControl theme

Pertinent information is kept short and sweet. The customer can either add the item to the cart or read the additional information beneath the button.

Well-designed navigation aids and product description options combine to make the shopping experience a gratifying one.

4. Place Greater Emphasis on User-controlled Video Content

Once upon a time, video was “the thing” on websites. While no longer new, video remains an effective method of providing highly useful content, but its popularity has taken a hit.

The reason? A lack of user control on too many websites. Videos are fine, but only when visitors feel a need to view them.

Thanks in part to Zoom connecting friends in 2020, more people have become accustomed to what video can offer. And if they’re given a choice as to what to view and what not to view, you can expect videos to make a comeback in 2021 — sans autoplay or embedded versions.

See how Payoneer has incorporated a Watch Video button in their design.

A screenshot of the Payoneer website

It’s not big and bold, but you can’t miss the way the white button stands out against the darker background. As you might expect, visitors appreciate having the choice of whether or not to watch the testimonial.

The BeOptics pre-built website takes a similar approach.

Screenshot of the BeOptics theme

In this case, the Play button acts as a gateway to additional site or product information. The way the button transforms upon hover makes visitors aware that they have an opportunity to learn more by watching the video.

5. Spend More Time Displaying Trust Builders

Trust builders are indispensable website elements. Brick-and-mortar store shoppers may take several minutes to size up a business and decide if they want to patronize it. Online shoppers will do the same in far less time.

Web designers can choose from a variety of trust-building approaches. For example:

  • employing charts, statistic callouts, counters, or other data visualization methods
  • using logos to help to solidify the brand
  • providing client testimonials, customer reviews, or user ratings
  • case studies
  • portfolios
  • security seals — such as Better Business Bureau (BBB) or TRUSTe
  • secure checkout and payment — such as PayPal checkout
  • proof of community service or social good

Consider which of the above would best convince visitors to become valued customers. Or, choose all of the above (although probably not for a one-page site).

Omaze has taken the approach of giving its visitors opportunities to win prizes when they’ve made donations. This website goes a step further by highlighting the good things its donors have enabled it to accomplish.

A screenshot of the Omaze website

It has also set aside a space for highlighting reputable publications that have featured Omaze. This serves to bring added legitimacy to the organization.

Screenshot of sites featuring Omaze listed on the Omaze website

It also uses data visualization and testimonials to provide an element of trust-building transparency as to how donations are processed and put to use.

Data visualization on the Omaze website

No matter the size of the organization or enterprise you’re designing a site for, there’s always room for one or more impressive trust builders.

BePortfolio shows how this could apply to a portfolio site.  

The home page has dedicated a lot of space to several of the trust builders cited earlier: 

  • counts of satisfied customers
  • client testimonials
  • case studies
  • samples from the portfolio
  • an impressive display of client logos

It’s simply a matter of giving people more than enough reasons to trust your brand.

Have You Taken Up these New Web Design Trends Yet?

Out with the old and in with the new? Not exactly. Some trends may never go out of fashion — minimalism and bold headline typography to name a couple. But 2020 changed the way we look at some things, and as a result, some design trends need to be discarded and replaced with others to adjust to the new normal.

Whether you want to update and upgrade existing sites or implement these new trends in your new website designs, BeTheme’s 600+ pre-built websites will steer you in the right direction.



Static Site Generators: A Beginner’s Guide


The Jamstack (JavaScript, APIs, and Markup) is increasingly becoming the development stack of choice on the Web. The title on the Jamstack website suggests that the Jamstack is “the modern way to build websites and apps” and that it “delivers better performance”.

Performance is certainly one of the benefits the Jamstack brings to the table, together with better security, scalability, and developer experience. Sites built on this type of architecture make use of pre-rendered static pages served over CDNs, can get data from multiple sources, and replace traditional servers and their databases with micro service APIs.

What makes possible the creation of static sites quickly and relatively painlessly are static site generators (SSGs).

There are tons of static site generators in a range of programming languages, such as JavaScript, Ruby, Go, and more. You’ll find an exhaustive, unfiltered list on, but if you’d like something more manageable, check out the Jamstack website’s list, where you can filter static site generators by name or by the number of GitHub stars.

In this article, I’m going to list seven popular static site generators and their main features, so that you can form a better idea of which one among them would be a good fit for your project.

What Are Static Site Generators?

A common CMS (content management system) — like WordPress, for instance — builds the web page dynamically as it’s being requested by the client: it assembles all the data from the database, and processes the content through a template engine. In contrast, while static site generators also process pages through a template engine, they handle the build process before the pages are requested by the client, meaning that they’re ready to serve when requested. All that’s hosted online is static assets, which makes sites much more lightweight and faster to serve.

To learn more about the differences between a traditional CMS and a static site generator, and about the benefits of using an SSG, check out Craig Buckler’s article on “7 Reasons to Use a Static Site Generator”.

But what about all the good things that come with CMSs, like content creation and updates by non-developers, team collaboration around content, and so on? Enter the headless CMS.

A headless CMS is one that has only a back end. There’s no front end to display the content. Its job is to manage content, and it provides an API that another front end can use to retrieve the data entered into it.

This way, the editorial team, for example, can continue working from their familiar, user-friendly admin interface, and the content they produce or update becomes just one data source among others that static site generators can access via the exposed API. Popular headless CMS options include Strapi, Sanity, and Contentful. WordPress also has a REST API that allows devs to use it as a headless CMS.
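To make that concrete, here’s a minimal sketch of a build-time step consuming the WordPress REST API. The posts endpoint and the `title.rendered`/`slug` fields are real WordPress conventions; the `extractPosts` helper and the example.com URL are illustrative:

```javascript
// Reduce a WordPress REST API posts payload to what a static page needs.
// Each post object carries `slug` and `title.rendered`, among many fields.
function extractPosts(payload) {
  return payload.map((post) => ({
    slug: post.slug,
    title: post.title.rendered
  }));
}

// At build time, an SSG data source might do something like:
// const res = await fetch('https://example.com/wp-json/wp/v2/posts');
// const pages = extractPosts(await res.json());

console.log(extractPosts([
  { slug: 'hello-world', title: { rendered: 'Hello world!' } }
]));
// → [ { slug: 'hello-world', title: 'Hello world!' } ]
```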

So, the modern Jamstack tooling makes it possible to build a statically-served website and still get the benefits of a content management system.

Now, let’s go through some static site generator options.

1. Gatsby


Gatsby is a full-blown framework for building static websites and apps. It’s built in React and leverages GraphQL for manipulating data. If you’re curious and want to delve deeper, check out “Getting Started with Gatsby: Build Your First Static Site” on SitePoint and the docs on the Gatsby website.

Here are some of Gatsby’s strong points:

  • With Gatsby you get to work with the latest web technologies — with React, webpack, modern JS, CSS and so on all ready for you to just start coding your site.
  • Gatsby’s rich plugin ecosystem allows you to use any kind of data you prefer from one or more sources.
  • Easy deployment and scalability, which is mainly due to the fact that Gatsby builds static pages that don’t require complicated setups.
  • Gatsby is a progressive web app (PWA) generator:

    You get code and data splitting out-of-the-box. Gatsby loads only the critical HTML, CSS, data, and JavaScript so your site loads as fast as possible. Once loaded, Gatsby prefetches resources for other pages so clicking around the site feels incredibly fast. — Gatsby website

  • gatsby-image combines Gatsby’s native image processing capabilities with advanced image loading techniques to easily and completely optimize image loading for your sites.
  • Plenty of starter sites are available for you to grab freely and customize.

2. Next.js


Next is a versatile framework for the creation of server-rendered or statically exported JavaScript apps. It’s built on top of React and is created by Vercel.

To create a Next app, run the following command in your terminal:

npx create-next-app nextjs-blog --use-npm --example ""

cd into nextjs-blog, your newly created directory, and run the following command to start your Next.js app’s development server on port 3000:

npm run dev

To check that everything works as expected, open http://localhost:3000 in your browser.

Next.js has great docs, where you can learn more about building and customizing your Next-based apps.

Here are a number of Next’s best features:

  • Next renders on the server by default, which is great for performance. For a discussion of the pros and cons of server-side rendering, check out this article by Alex Grigoryan on Medium.
  • No setup necessary: automatic code-splitting, routing and hot reload out of the box.
  • Image optimization, internationalization, and analytics.
  • Great docs, tutorials, quizzes and examples to get you up and running from beginner to advanced user.
  • Built-in CSS support.
  • Tons of example apps to get you started.

3. Hugo

Hugo - static site generators

Hugo is a very popular static site generator with over 49k stars on GitHub right now. It’s written in Go, and advertises itself as being the fastest framework for building websites. In fact, Hugo comes with a fast build process, which makes building static websites a breeze and works great for blogs with lots of posts.

The docs are great and on the platform’s website you’ll find a fantastic quickstart guide that gets you up and running with the software in no time.

Here are some of Hugo’s most loved features:

  • Designed and optimized for speed: as a rule of thumb, each piece of content renders in about one millisecond.
  • No need to install extra plugins for things like pagination, redirection, multiple content types, and more.
  • Rich theming system.
  • Shortcodes available as an alternative to using Markdown.
  • Since December 2020, Hugo offers Dart Sass support, and a new filter to overlay an image on top of another — Hugo 0.80: Last Release of 2020!

4. Nuxt.js


Nuxt.js is a higher-level framework built with Vue.js that lets you create production-ready web apps. With Nuxt, you can have:

  • Server-side rendering for your website, also called universal or isomorphic mode. Nuxt uses a Node server to deliver HTML based on Vue components.
  • Static site generation. With Nuxt, you can build static websites based on your Vue application.
  • Single-page apps (SPAs). Nuxt gives you the configuration and the framework to build your Vue-based SPA.

Creating Nuxt-based websites can be done super quickly. Here’s the Hello World example on the Nuxt website. You can download it on your machine or play with it on Codesandbox to get started.

Here are some of Nuxt.js’s features:

  • Great performance: Nuxt-based apps are optimized out of the box.
  • Modular: Nuxt is built using a powerful modular structure. There are more than 50 modules you can choose from to speed up your development experience.
  • Relatively easy learning curve. Nuxt is based on Vue, which is a framework that makes it quick and painless to get started.
  • Integrated state management with Vuex.
  • Automatic Code Splitting.
  • Cutting-edge JavaScript code transpilation.
  • Bundling and minifying of JS and CSS.
  • Managing <head> element (<title>, <meta>, etc.).
  • Pre-processor: Sass, Less, Stylus, etc.

5. Jekyll


Jekyll’s simplicity and shallow learning curve make it a popular choice, with 42k+ stars on GitHub at the time of writing. It’s been around since 2008, so it’s a mature and well-supported piece of software.

Jekyll is built with Ruby. You write your content in Markdown, and the templating engine is Liquid. It’s ideal for blogs and other text-heavy websites. Jekyll is also the engine that powers GitHub Pages, which means that you can host your Jekyll site on GitHub Pages for free, “custom domain name and all”.

Great features Jekyll has to offer include:

  • simplicity
  • free hosting on GitHub Pages
  • great community that takes care of maintenance and the creation of themes, plugins, tutorials and other resources

6. Eleventy

Eleventy JS

Eleventy, often considered the JavaScript alternative to Jekyll, introduces itself as “a simpler static site generator”. Eleventy is built on native JavaScript with no frameworks (although you can use your favorite one, if you so choose), takes a zero-configuration approach by default, and lets you use the templating engine that you prefer.

To quickly get up and running with Eleventy, check out Craig Buckler’s “Getting Started with Eleventy”, Raymond Camden’s “11ty Tutorial: Cranking Your Jamstack Blog Up to 11”, and Tatiana Mac’s “Beginner’s Guide to Eleventy”, or head over to the getting started docs pages on the Eleventy website.

Some nice features include:

  • simplicity and performance
  • community-driven
  • flexible templating system
  • fast build times

7. VuePress


VuePress is a Vue-powered static site generator. Its default theme is optimized for writing technical docs, so it works great for this type of site right out of the box. Its current, stable version at the time of writing is 1.8.0, but if you’re curious about the breaking changes that are in the works, check out version 2 alpha on GitHub.

A VuePress site works as an SPA that leverages the power of Vue, Vue Router and webpack.

To get started with VuePress, you need Node.js v.10+ and optionally Yarn Classic.

For a quick VuePress setup, use the create-vuepress-site generator by opening your terminal in your directory of choice and running either of the following commands, depending on whether you’re using npm or Yarn:


npx create-vuepress-site [optionalDirectoryName]


yarn create vuepress-site [optionalDirectoryName]

After you’ve answered a few configuration questions, you should see the new website file structure in your chosen folder.

Head over to the VuePress Guide for more details.

Here are some great features that VuePress has to offer:

  • Setting up your VuePress-based site is quick and you can write your content using Markdown.
  • VuePress is built on Vue, which means that you can enjoy the web experience of Vue, webpack, the possibility of using Vue components inside Markdown files and of developing custom themes with Vue.
  • Fast loading experience: VuePress static sites are made of pre-rendered static HTML and run as an SPA once they’re loaded in the browser.
  • Multilanguage support by default with i18n.

Nuxt.js or VuePress?

Both Nuxt.js and VuePress are built on top of Vue.js and let you create static websites. So, which one is to be preferred over the other?

Let’s say that Nuxt.js can do everything VuePress does. However, in essence, Nuxt is best suited for building applications. VuePress, on the other hand, is ideal for creating static documentation websites that display content written in Markdown.

In short, if all you need is a documentation site or a very simple blog in Vue.js, consider reaching for VuePress, as Nuxt would be overkill.

How to Choose a Static Site Generator

With all the options available, it’s easy to feel paralyzed when it comes to choosing a static site generator that fits the bill. There are some considerations that can help you sift through what’s on offer.

Your project’s requirements should throw some light on the features you should be looking for in your SSG.

If your project needs lots of dynamic capabilities out of the box, then Hugo and Gatsby could be a good choice. As for build and deploy time, all of the SSGs listed above perform very well, although Hugo seems to be the favorite, especially if your website has a lot of content.

Is your project a blog or a personal website? In this case Jekyll and Eleventy could be excellent choices, while for a simple documentation website VuePress would be a great fit. If you’re planning an ecommerce website, you might want to consider which SSG fits in well with a headless CMS for store management. In this case, Gatsby and Nuxt could work pretty well.

One more thing you might want to consider is your familiarity with each of the SSG languages. If you program in Go, then Hugo is most likely your preferred choice. On the other hand, if JavaScript is your favorite programming language, you’re spoilt for choice: Eleventy is built in pure JS, Next and Gatsby are built on top of React, while Nuxt and VuePress are built in Vue.

With regard to stuff like great documentation, strong community and support, all of the static site generators I listed figure among the most popular.



Creating Directionally Lit 3D Buttons with CSS


I’m not too sure how I stumbled into this one. But something led me to this tweet:

And, to me, that’s a challenge.

The button design is neat. But I didn’t want to make a direct copy. Instead, I decided to make a “Twitter” button. The idea is that we create an almost transparent button with a social icon on it. That social icon casts a shadow below. Moving our mouse across the button shines a light over it, and pressing the button pushes it onto the surface. Here’s the final result:

In this article, we’re going to look at how you can make it too. The cool thing is, you can swap the icon out to whatever you want.

The Markup

My first-take approach for creating something like this is to scaffold the markup. Upon first inspection, we’ll need to duplicate the social icon used. And a neat way to do this is to use Pug and leverage mixins:

mixin icon()
  svg.button__icon(role='img' xmlns='http://www.w3.org/2000/svg' viewbox='0 0 24 24')
    title Twitter icon
    path(d='M23.953 4.57a10 10 0 01-2.825.775 4.958 4.958 0 002.163-2.723c-.951.555-2.005.959-3.127 1.184a4.92 4.92 0 00-8.384 4.482C7.69 8.095 4.067 6.13 1.64 3.162a4.822 4.822 0 00-.666 2.475c0 1.71.87 3.213 2.188 4.096a4.904 4.904 0 01-2.228-.616v.06a4.923 4.923 0 003.946 4.827 4.996 4.996 0 01-2.212.085 4.936 4.936 0 004.604 3.417 9.867 9.867 0 01-6.102 2.105c-.39 0-.779-.023-1.17-.067a13.995 13.995 0 007.557 2.209c9.053 0 13.998-7.496 13.998-13.985 0-.21 0-.42-.015-.63A9.935 9.935 0 0024 4.59z')

Here, we’ve created a mixin for rendering an SVG of the Twitter icon. We can invoke it like so:

+icon()
Doing that will give us a big Twitter icon.

See the Pen
1. Render An Icon
by SitePoint (@SitePoint)
on CodePen.

Because social icon sets tend to use the same “0 0 24 24” viewBox, we could make the title and path arguments:

mixin icon(title, path)
  svg.button__icon(role='img' xmlns='http://www.w3.org/2000/svg' viewbox='0 0 24 24')
    title= title
    path(d=path)

Then our Twitter icon becomes:

+icon('Twitter Icon', 'M23.953 4.57a10 10 0 01-2.825.775 4.958 4.958 0 002.163-2.723c-.951.555-2.005.959-3.127 1.184a4.92 4.92 0 00-8.384 4.482C7.69 8.095 4.067 6.13 1.64 3.162a4.822 4.822 0 00-.666 2.475c0 1.71.87 3.213 2.188 4.096a4.904 4.904 0 01-2.228-.616v.06a4.923 4.923 0 003.946 4.827 4.996 4.996 0 01-2.212.085 4.936 4.936 0 004.604 3.417 9.867 9.867 0 01-6.102 2.105c-.39 0-.779-.023-1.17-.067a13.995 13.995 0 007.557 2.209c9.053 0 13.998-7.496 13.998-13.985 0-.21 0-.42-.015-.63A9.935 9.935 0 0024 4.59z')

But, we could pass it a key — and then have the paths stored in an object if we have many icons we wanted to use or repeat:

mixin icon(key)
  - const PATH_MAP = { Twitter: "M23.953 4.57a10 10 0 01-2.825.775 4.958 4.958 0 002.163-2.723c-.951.555-2.005.959-3.127 1.184a4.92 4.92 0 00-8.384 4.482C7.69 8.095 4.067 6.13 1.64 3.162a4.822 4.822 0 00-.666 2.475c0 1.71.87 3.213 2.188 4.096a4.904 4.904 0 01-2.228-.616v.06a4.923 4.923 0 003.946 4.827 4.996 4.996 0 01-2.212.085 4.936 4.936 0 004.604 3.417 9.867 9.867 0 01-6.102 2.105c-.39 0-.779-.023-1.17-.067a13.995 13.995 0 007.557 2.209c9.053 0 13.998-7.496 13.998-13.985 0-.21 0-.42-.015-.63A9.935 9.935 0 0024 4.59z" }
  svg.button__icon(role='img' xmlns='http://www.w3.org/2000/svg' viewbox='0 0 24 24')
    title= `${key} Icon`
    path(d=PATH_MAP[key])

+icon('Twitter')

This can be a neat way to create an icon mixin to reuse. It’s a little overkill for our example, but worth noting.

Now, we need some markup for our button.

.scene
  button.button
    span.button__shadow
      +icon('Twitter')
    span.button__content
      +icon('Twitter')
      span.button__shine

It’s always good to be mindful of accessibility. We can see what our button exposes by inspecting the Accessibility panel in the browser’s developer tools.

The accessibility panel in Chrome

It might be a good idea to put a span in for our button text and hide the icons with aria-hidden. We can hide the span text too whilst making it available to screen readers:

.scene
  button.button
    span.button__shadow
      +icon('Twitter')
    span.button__content
      span.button__text Twitter
      +icon('Twitter')
      span.button__shine

We’ve got different options for applying those aria-hidden attributes. The one we’ll use is changing the mixin code to apply aria-hidden:

mixin icon(key)
  - const PATH_MAP = { Twitter: "...path code" }
  svg.button__icon(role='img' aria-hidden="true" xmlns='http://www.w3.org/2000/svg' viewbox='0 0 24 24')
    title= `${key} Icon`
    path(d=PATH_MAP[key])

Another neat way with Pug is to pass through all attributes to a mixin. This is useful in scenarios where we only want to pass some attributes through:

mixin icon(key)
  - const PATH_MAP = { Twitter: "...path code" }
  svg.button__icon(role='img' xmlns='http://www.w3.org/2000/svg' viewbox='0 0 24 24')&attributes(attributes)
    title= `${key} Icon`
    path(d=PATH_MAP[key])

If we check the Accessibility panel again, our button only reads “Twitter”. And that’s what we want!

The Styles

Here’s the part you came for — how we style the thing. To start, we’ve dropped this in:

* {
  transform-style: preserve-3d;
}

That allows us to create the 3D transforms we need for our button. Try switching that off in the final demo and you’ll see that everything breaks.

Let’s hide the span text visually. We can do this in many ways. One recommended way to hide an element from our eyes, but not from screen readers, is to use these styles:

.button__text {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border-width: 0;
}

Before we start working on our button, we’re going to tilt the scene. We can do this using a transform. Here we chain the transform to get it into the position we want. I spent a bit of time tinkering with values here on live stream to get it close to the original:

.scene {
  height: var(--size);
  position: relative;
  width: var(--size);
  transform: rotateX(-40deg) rotateY(18deg) rotateX(90deg);
}

You’ll notice a size variable there too. We’re going to drive certain things for our button with CSS variables. This will make it handy for tinkering with values and the effect. Usually, we’d put these under the scope they’re required in. But, for demos like this, putting them under the :root at the top of our file makes it easier for us to play with.

:root {
  --blur: 8px;
  --shine-blur: calc(var(--blur) * 4);
  --size: 25vmin;
  --transition: 0.1s;
  --depth: 3vmin;
  --icon-size: 75%;
  --radius: 24%;
  --shine: rgba(255,255,255,0.85);
  --button-bg: rgba(0,0,0,0.025);
  --shadow-bg: rgba(0,0,0,0.115);
  --shadow-icon: rgba(0,0,0,0.35);
  --bg: #e8f4fd;
}

These are the variables we’re working with, and they’ll make sense as we build up our button.

The Button

Let’s move on to the button! The button element is going to fill the scene element. We could have applied the sizing and transforms directly on the button. But if we were to introduce other buttons and elements, we’d have to transform and size them all. This is something to be mindful of with CSS in general. Try to make your container elements dictate the layout:

.button {
  appearance: none;
  background: none;
  border: 0;
  cursor: pointer;
  height: 100%;
  outline: transparent;
  position: absolute;
  width: 100%;
}

Here we strip the button styles. And that gives us this.

See the Pen
9. Strip Button Styles
by SitePoint (@SitePoint)
on CodePen.

Next, we need to create a common starting point for the button content and the shadow. We can do this by giving each element absolute positioning. The content will have a 3D translate based on the depth variable we defined before:

.button__content,
.button__shadow {
  border-radius: var(--radius);
  display: grid;
  height: 100%;
  place-items: center;
  position: absolute;
  width: 100%;
}

.button__content {
  transform: translate3d(0, 0, var(--depth));
}

Note how we’re also making use of the --radius variable too.

See the Pen
10. Give The Button Depth
by SitePoint (@SitePoint)
on CodePen.

It’s hard to distinguish between the two icons at this stage. And now’s a good time to style them. We can apply some basic icon styling and use a scoped fill for each SVG icon:

.button__content {
  --fill: var(--icon-fill);
}

.button__shadow {
  --fill: var(--shadow-fill);
}

.button__icon {
  height: var(--icon-size);
  fill: var(--fill);
  width: var(--icon-size);
}

It’s getting there! The icons aren’t the same size at the moment, though. We’ll get to that.

See the Pen
11. Apply Scoped Fill
by SitePoint (@SitePoint)
on CodePen.

Let’s get the button press in place. This part is really quick to integrate:

.button__content {
  transition: transform var(--transition);
}

.button:active {
  --depth: 0;
}

That’s it! Using scoped CSS variables, we’re saying remove the z-axis translation on :active. Adding the transition to the transform stops it from being so instant.

See the Pen
12. Press on :active
by SitePoint (@SitePoint)
on CodePen.

All that’s left to do is style the button layers and the shine. Let’s start with the shadow:

.button__shadow {
  background: var(--shadow-bg);
  filter: blur(var(--blur));
  transition: filter var(--transition);
}

.button:active {
  --blur: 0;
}

Another scoped style here. We’re saying that when we press the button, the shadow is no longer blurred. And to blur the shadow, we use the CSS filter property with a blur filter — the value of which we defined in our CSS variables. Have a play with the --blur variable and see what happens.

See the Pen
13. Reduce Blur on Hover
by SitePoint (@SitePoint)
on CodePen.

For the content layer, we’re going to use a background color and then apply a backdrop filter. Much like filter, backdrop-filter is a way for us to apply visual effects to elements. A common use case currently is to use blur for “Glassmorphism”:

.button__content {
  backdrop-filter: blur(calc(var(--blur) * 0.25));
  background: var(--button-bg);
  overflow: hidden;
  transition: transform var(--transition), backdrop-filter var(--transition);
}

We use the value of --blur and apply a transition for backdrop-filter. Because of the way we scoped our --blur variable on :active, we get the transition almost for free. Why the overflow: hidden? We’re anticipating that shine element that will move around the button. We don’t want it wandering off outside, though.

See the Pen
14. Styling Content Layer
by SitePoint (@SitePoint)
on CodePen.

And now, the last piece of the puzzle: the light shine. This is what’s been causing the icons to be different sizes. Because it has no styles, it’s affecting the layout. Let’s give it some styles:

.button__shine {
  --shine-size: calc(var(--size) * 0.5);
  background: var(--shine);
  border-radius: 50%;
  height: var(--shine-size);
  filter: blur(var(--shine-blur)) brightness(1.25);
  position: absolute;
  transform: translate3d(0, 0, 1vmin);
  width: var(--shine-size);
}

That absolute positioning will sort out the icon sizing. Applying a border radius makes the spotlight round. And we use filter again to create the blurred spotlight effect. You’ll notice we’ve chained a brightness filter on the end to brighten things up a bit after the blur.

See the Pen
15. Styling Shine
by SitePoint (@SitePoint)
on CodePen.

Using the 3D translation ensures that the shine sits above the button, as it would in real life. This way, there’s no chance of it getting cut off by z-fighting with other elements.

That’s all we need for the styles for now. Now it’s time for some scripts.


We’re going to use GreenSock here for convenience. They have some neat utilities for what we want. But, we could achieve the same result with vanilla JavaScript. Because we’re using scripts with type “module”, we can take advantage of SkyPack.

import gsap from 'https://cdn.skypack.dev/gsap'

And now we’re ready to start tinkering. We want our button to respond to pointer movement. The first thing we want is to translate the shine as if it follows our pointer. The second is to shift the button depending on where our pointer is.

Let’s grab the elements we need and set up some basic event listeners on the document:

import gsap from 'https://cdn.skypack.dev/gsap'

const BUTTON = document.querySelector('.button')
const CONTENT = document.querySelector('.button__content')
const SHINE = document.querySelector('.button__shine')

const UPDATE = ({x, y}) => console.log({x, y})

document.addEventListener('pointermove', UPDATE)
document.addEventListener('pointerdown', UPDATE)

Try moving your pointer around in this demo to see the values we get returned for x and y:

See the Pen
16. Grabbing Elements and Creating Event Listeners
by SitePoint (@SitePoint)
on CodePen.

This is the trickiest bit. We need some math to work out the shine position. We’re going to translate the shine after its initial reset. We need to first update the shine styles to accommodate this. We’re using the scoped CSS variables --x and --y. We give them a fallback of -150 so they’ll be out of shot when the demo loads:

.button__shine {
  top: 0;
  left: 0;
  transform:
    translate3d(-50%, -50%, 1vmin)
    translate(calc(var(--x, -150) * 1%), calc(var(--y, -150) * 1%));
}

Then, in our update function, we calculate the new position for the shine, based on a percentage of the button size. We subtract the button position from the pointer position, divide that by the button’s size, and finally multiply by 200 to get a percentage:

const BOUNDS = CONTENT.getBoundingClientRect()
const POS_X = ((x - BOUNDS.x) / BOUNDS.width) * 200
const POS_Y = ((y - BOUNDS.y) / BOUNDS.height) * 200

For example, POS_X:

  1. Grab pointer position x.
  2. Subtract button position x.
  3. Divide by button width.
  4. Multiply by 200.

We multiply by 200 because the shine is half the size of the button. This particular part is tricky because we’re trying to track the pointer and map it into 3D space.
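To sanity-check that math, here's a quick sketch with made-up numbers (the 200px button and pointer position are purely illustrative):

```javascript
// Hypothetical numbers: a 200px-wide button whose left edge sits
// at x = 100, with the pointer at x = 200 (the button's center).
const BOUNDS = { x: 100, width: 200 };
const pointerX = 200;

// Same formula as the update function above.
const POS_X = ((pointerX - BOUNDS.x) / BOUNDS.width) * 200;

console.log(POS_X); // 100
```

A value of 100 means the shine translates by 100% of its own width; since the shine is half the button's size, that lands its center exactly under the pointer at the button's midpoint.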

To apply that to the button, we can set those CSS variables using gsap.set. That’s a GSAP method that works as a zero second tween. It’s particularly useful for setting values on elements:

gsap.set(SHINE, {
  '--x': POS_X,
  '--y': POS_Y,
})

But, if we want to take it one step further, we can use a quickSetter from GSAP, which would be better for performance in real-world scenarios where we’re making lots of updates:

const xySet = gsap.quickSetter(SHINE, 'css')

xySet({
  '--x': POS_X,
  '--y': POS_Y,
})

That makes our update function look something like this:

const UPDATE = ({x, y}) => {
  const BOUNDS = CONTENT.getBoundingClientRect()
  const POS_X = ((x - BOUNDS.x) / BOUNDS.width) * 200
  const POS_Y = ((y - BOUNDS.y) / BOUNDS.height) * 200
  xySet({
    '--x': POS_X,
    '--y': POS_Y,
  })
}

The accuracy of following the pointer would need more calculations to be precise. Have a play with this demo where the overflow on the button is visible and the shine is more prominent. You can see how the shine element loses its tracking.

See the Pen
17. Translating the Shine Playground
by SitePoint (@SitePoint)
on CodePen.

This demo puts everything where it should be.

See the Pen
18. Translating the Shine
by SitePoint (@SitePoint)
on CodePen.

Last feature. Let’s shift the button for an added touch. Here, we’re going to base the shift of the button on pointer position. But, we’re going to limit its movement. To do this, we can use another GSAP utility. We’re going to use mapRange. This allows us to map one set of values to another. We can then pass a value in and get a mapped value back out.

First, we’ll define a limit for movement. This will be a percentage of the button size:

const LIMIT = 10

Now, in our update function we can calculate the percentage of shift. We do this by mapping the window width against the limit. And we input our pointer position to get the mapped percentage back:

const xPercent = gsap.utils.mapRange(
  0,
  window.innerWidth,
  -LIMIT,
  LIMIT,
  x
)

In this block we’re mapping the range of 0 to window.innerWidth against -10 to 10. Passing pointer position x will give us a value between -10 and 10. And then we can apply that percentage shift to our button. We do the same for vertical shift and this gives us an update function like the following:

const buttonSet = gsap.quickSetter(BUTTON, 'css')
const xySet = gsap.quickSetter(SHINE, 'css')
const LIMIT = 10 const UPDATE = ({x, y}) => { const BOUNDS = CONTENT.getBoundingClientRect() const POS_X = ((x - BOUNDS.x) / BOUNDS.width) * 200 const POS_Y = ((y - BOUNDS.y) / BOUNDS.height) * 200 xySet({ '--x': POS_X, '--y': POS_Y }) const xPercent = gsap.utils.mapRange( 0, window.innerWidth, -LIMIT, LIMIT, x ) const yPercent = gsap.utils.mapRange( 0, window.innerHeight, -LIMIT, LIMIT, y ) buttonSet({ xPercent, yPercent, })

That’s it!

That’s how you create a directional lit 3D button with CSS and a little scripting. The cool thing is that we can make changes with relative ease.

For the final demo, I’ve added some extra details and changed the icon. You might recognize it.

See the Pen
20. SitePoint Button
by SitePoint (@SitePoint)
on CodePen.

As always, thanks for reading. Wanna see more? Come find me on Twitter or check out the live stream!



Learn Snowpack: A High-Performance Frontend Build Tool


In this article, we’ll take a first look at Snowpack — specifically Snowpack 3, which at the time of writing has just been released. Snowpack is a front-end build tool that’s been getting a lot of attention in the community for offering a different approach from tools like webpack, and I’ve been keen to check it out for a while. Let’s dive in!

A History of Build Tools

Before we look into Snowpack, we need to take a quick moment to understand how and why bundlers like webpack came to be. JavaScript’s lack of a module system prior to ES2015 meant that, in the browser, the closest we could get to modules was to split our code into separate files that attached everything to the global scope, since that was the only way to share it between files. It was common to see code like this:

window.APP = {}
window.APP.Authentication = {...}
window.APP.ApiLoader = {...}

When Node.js arrived and gained popularity, it had a module system in the form of CommonJS:

const Authentication = require('./Authentication.js')
const APILoader = require('./APILoader.js')
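For completeness, here's what the exporting side of a CommonJS module might look like. This Authentication.js and its login function are hypothetical, just to show the shape:

```javascript
// Authentication.js: a hypothetical CommonJS module.
// Everything attached to module.exports becomes the value
// handed back by require('./Authentication.js').
function login(user, password) {
  // Trivial placeholder logic for illustration only.
  return Boolean(user && password);
}

module.exports = { login };
```

A consumer would then destructure it: `const { login } = require('./Authentication.js')`.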

Once this became popular as part of Node, people wanted to be able to use it in the browser. That’s when tools started emerging that did this; they could take an application that used CommonJS modules, and bundle it into one large JavaScript file, with all the requires removed, that could be executed in the browser. Browserify was the first such tool that I can remember using to do this, and, to be honest, it felt like magic! This was around the time that webpack came to be, and other tools also supported using CommonJS.

When ES Modules were first introduced (see “Understanding ES6 Modules” for a refresher), people were keen to use them, but there were two problems:

  1. Whilst the spec was done, browsers didn’t support ES Modules.
  2. Even if a browser did support ES Modules, you probably still wanted to bundle in production, because it takes time to load in all the modules if they’re defined as separate files.

Webpack (and others) updated to support ES Modules, but they would always bundle your code into one file, both for developing and for production. This meant that a typical workflow is:

  1. Edit a file in your application.
  2. Webpack looks at which file changed, and rebundles your application.
  3. You can refresh the browser and see your change. Often, this is done for you by a webpack plugin such as hot module reloading.

The problem here lies in step two as your application grows in size. The work for webpack to spot a file change and then figure out which parts of your application to rebundle into the main bundle can take time, and on large applications that can cause a serious slowdown. That’s where Snowpack comes in …

Snowpack’s Approach

Snowpack’s key selling point for me is this line from their documentation:

Snowpack serves your application unbundled during development. Each file needs to be built only once and then is cached forever. When a file changes, Snowpack rebuilds that single file.

Snowpack takes full advantage of ES Modules being supported across all major browsers and doesn’t bundle your application in development, but instead serves up each module as a single file, letting the browser import your application via ES Modules. See “Using ES Modules in the Browser today” for more detail on browsers and their support for unbundled ES Modules.

It’s important to note at this point that you must use ES Modules to use Snowpack. You can’t use CommonJS in your application.

This however raises a question: what if you install a dependency from npm that does use CommonJS? Although I hope one day that the majority of npm packages are shipped as ES Modules, we’re still a fair way off that, and the reality is even if you build an application exclusively in ES Modules, it’s highly likely at some point you’ll need a dependency that’s authored in CommonJS.

Luckily, Snowpack can deal with that too! When it sees a dependency (let’s say, React), in your node_modules folder, it can bundle just that dependency into its own mini-bundle, which can then be imported using ES Modules.

Hopefully you can see why Snowpack caught my eye. Let’s get it up and running and see how it feels to use on an application.

Getting Started

To start with, I create a new, empty project folder and run npm init -y to get up and running. This creates a basic package.json. You can also run npm init without the -y flag, which makes npm prompt you to fill in the details; I like using -y to get going quickly and edit the package.json later.

I then install Snowpack as a dev dependency:

npm install --save-dev snowpack

And now I add two scripts into my package.json:

"scripts": {
  "start": "snowpack dev",
  "build": "snowpack build"
}

This sets us up two npm run commands:

  • npm run start will run Snowpack in development mode.
  • npm run build will run a production build of Snowpack, which we’ll talk more about later.

When we run our application, Snowpack fires up a little development server that will run our application locally. It will look for an index.html file, so let’s create one of those and also create app.js, which for now will just log hello world to the console:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Snowpack testing</title>
  </head>
  <body>
    <script src="./app.js"></script>
  </body>
</html>

And in app.js:

console.log('hello world')

Now we can run npm run start (or npm start for short — start is one of the npm lifecycle methods where you don’t need to prefix it with run).

You should see your terminal output look something like this:

snowpack

  http://localhost:8080 • Server started in 80ms.

▼ Console

[snowpack] Hint: run "snowpack init" to create a project config file. Using defaults...
[snowpack] Nothing to install.

The first part of the output tells us that Snowpack is running on localhost:8080. The next line prompts us to create a Snowpack configuration file, which we’ll do shortly, but it’s the last line that I want to highlight:

[snowpack] Nothing to install.

This is Snowpack telling us that it’s checked for any npm modules that need dealing with, and it hasn’t found any. In a moment, we’ll add an npm package and take a look at how Snowpack deals with it.

Generating a Configuration File

You can run npx snowpack init to generate the configuration file, as the command-line output suggests. We won’t need to change Snowpack’s behavior until we come to bundling for production, but when you do, you can create this file and configure a wide range of options to get Snowpack running just how you want it to.

Writing in ES Modules

Let’s create another JavaScript file to see how Snowpack deals with multiple files. I created api.js, which exports a function that takes a username and fetches some of their public repositories from GitHub:

export function fetchRepositories(user) {
  return fetch(`https://api.github.com/users/${user}/repos`)
    .then(response => response.json());
}

Then, in app.js, we can import and use this function. Feel free to replace my GitHub username with your own!

import {fetchRepositories} from './api.js';
fetchRepositories('jackfranklin').then(data => console.log(data));

Save this file, and run Snowpack again if you didn’t leave it running previously. In the browser console, you’ll see an error:

Uncaught SyntaxError: Cannot use import statement outside a module

This is because of our <script> tag in our HTML file:

<script src="./app.js"></script>

Because ES Modules behave slightly differently from code that doesn’t use ES Modules, it’s not possible for browsers to just start supporting ES Modules in all scripts. Doing so would almost certainly break some existing websites, and one of the main goals of JavaScript is that any new features are backwards compatible. Otherwise, every new JS feature might break thousands of existing websites!

In order to use ES Modules, all we need to do is tell the browser that by giving the script tag a type of module:

<script type="module" src="./app.js"></script>

And when you save that, your browser should refresh automatically (another nice thing Snowpack does out of the box) and you’ll see a list of GitHub repositories logged to the console.

Installing npm Dependencies

Let’s see how Snowpack deals with installing a package from npm. I’m going to get our list of repositories rendered onto the screen with Preact. Firstly, let’s install it:

npm install --save preact

To check it’s working, I’ll update app.js to render Hello world onto the screen:

import {fetchRepositories} from './api.js';
import {h, render} from 'preact'; fetchRepositories('jackfranklin').then(data => { render(h('p', null, 'Hello world'), document.body);

Note that I’m using the h helper to create HTML, rather than use JSX. I’m doing this for speed purposes, to get an example up and running. We’ll swap to JSX a bit later in this article and see how Snowpack handles it, so hold tight.

Now when we run npm start, Snowpack will output this:

[snowpack] ! building dependencies...
[snowpack] ✔ dependencies ready! [0.33s]

You can see that it found Preact, and created an ES Modules bundle ready for us to use. If you look in the Network tab of the developer tools, you’ll see a request to app.js, api.js and preact.js, which is the file Snowpack created for us from the Preact dependency. What’s nice about Snowpack’s approach is that now it’s created that Preact file, it will cache it and only ever change it if Preact changes. Given that Preact is a dependency, we’re probably not going to be changing it regularly, so it shouldn’t have to do that work often. This is one of the ways Snowpack keeps development nice and snappy.

The network tab in ChromeDevTools

Supporting JSX

Snowpack has good support for a number of syntaxes and filetypes out of the box. It does support JSX, but with one condition: all JSX must be defined in .jsx files. You can change this, if you want (check the documentation for details), but I’ve always liked using .jsx. Let’s create a new JSX file that contains our Preact component, repo-list.jsx:

import {h} from 'preact';

export function RepoList(props) {
  return <ul>{props.repos.map(repo => {
    return <li><p>{repo.name}</p></li>
  })}</ul>
}

Note that, despite the fact that we don’t call the h helper directly, we need to import it so that Snowpack doesn’t assume we’re using React.
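To see why that import matters, it helps to know that the JSX compiles down to nested h() calls. Here's a toy h() (not Preact's real implementation, which builds virtual DOM nodes) that makes the resulting structure visible:

```javascript
// A toy h() to illustrate the shape of the calls JSX produces.
// It just builds a plain object tree so the structure is visible.
const h = (type, props, ...children) => ({ type, props, children: children.flat() });

// Roughly what the RepoList JSX turns into after compilation:
const RepoList = (props) =>
  h('ul', null, props.repos.map(repo => h('li', null, h('p', null, repo.name))));

const tree = RepoList({ repos: [{ name: 'snowpack-demo' }] });
console.log(tree.type); // 'ul'
console.log(tree.children[0].children[0].children[0]); // 'snowpack-demo'
```

Because every JSX element becomes an h() call, the compiler needs h in scope, which is why the import is required even though we never write it ourselves.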

Now in app.js we can render our component:

import {h, render} from 'preact';
import {fetchRepositories} from './api.js';
import {RepoList} from './repo-list.jsx';

fetchRepositories('jackfranklin').then(data => {
  render(h(RepoList, { repos: data }, null), document.body);
});

And we have our list of repositories on the screen.

Production Builds

At the time of writing, running a Snowpack production build won’t bundle and minify all your files together into one bundle as you might expect. It’s explained further in the Snowpack production build guide, but Snowpack’s speciality is to be an ES Modules multi-file build tool, not a complete bundler. At the time of writing, Snowpack is working on providing built-in bundling via esbuild, but the docs state that this is still very experimental and shouldn’t be relied on for large projects.

Instead, Snowpack recommends using another bundler that it provides plugins for, such as webpack or Rollup.

Note that you don’t have to manually install the other bundler. These are Snowpack plugins which you can configure in your Snowpack configuration file. Snowpack will then take care of calling webpack/Rollup for you to bundle your application when you run snowpack build.

Bundling with Webpack

We’ll look shortly at Snowpack’s built-in esbuild bundler support, but for now using one of these plugins is a straightforward solution and also the recommended approach. Let’s get Snowpack’s webpack plugin set up to minify our code when we build for production. First, we’ll install it:

npm install --save-dev @snowpack/plugin-webpack

You’ll also need a configuration file, so run npx snowpack init (if you haven’t already) to generate a configuration file where we can configure the webpack plugin for production builds.

In snowpack.config.js, make the plugins item look like so:

plugins: [
  ['@snowpack/plugin-webpack', {}]
],

The empty object is where you can place any extra configuration settings, though it should work just fine out of the box. Now when we run npm run build, Snowpack will recognize that we’ve added the webpack plugin and bundle accordingly, generating us an optimized, minified bundle that we can ship.

One of the nice things that webpack provides out of the box is dead code elimination — also known in the JavaScript community as “tree shaking” — to avoid code that’s not required making it into our final bundle.

We can see this for ourselves if we export and define a function in api.js which we never use:

export function fetchRepositories(user) {
  return fetch(`https://api.github.com/users/${user}/repos`)
    .then(response => response.json());
}

export function neverUsed() {
  console.log('NEVER CALLED')
}

If we run npm run build once more and then open our minified output (it will be in the build/js directory and be called app.[hash].js), we can search the file for 'NEVER CALLED' and see that it hasn’t been included. Webpack was smart enough to understand that we never call that function, so it could be removed from the final output.

Bundling with esbuild

To get a sense of what the future might look like once Snowpack’s esbuild support is improved and esbuild itself is more production ready (see the esbuild docs for more detail on esbuild and its roadmap), let’s configure that. First remove all the webpack plugin configuration from your snowpack.config.js file and instead add an optimize object:

plugins: [],
optimize: {
  bundle: true,
  minify: true,
  target: 'es2018',
  treeshake: true,
},

Now when you run npm run build, esbuild will take over and perform the final optimization steps, creating build/app.js, which will be a fully minified version. It also removes dead code just like webpack, so our neverUsed() function has not made it into the final build.

For now, I’d stick with the webpack plugin if you need fully robust, battle-tested bundling, but for side projects or small apps, it might be worth exploring esbuild further.


Snowpack offered me a fantastic developer experience, and one that’s left me very keen to try it again on another project. I know in this article we used Preact, but Snowpack supports many other libraries, including React and Svelte, all documented on the website.

If you haven’t used Snowpack before, I highly recommend giving it a go, and keeping your eye on Snowpack over the coming months and years. I wouldn’t be surprised if it’s a tool that the majority of developers are using in the not-too-distant future.

Here’s a handy Snowpack demo on GitHub, demonstrating how Snowpack serves modules in development mode and how, with the help of its webpack plugin, it can minify your code for production.



10 Web Developer Resumé Tweaks to Get More Interviews


There’s no shortage of remote web development jobs, especially at a time when so many companies have made the shift to remote working. But without a strong resumé, your chances of landing a developer job interview are reduced, or worse, non-existent.

In this article, we cover ten simple tweaks you can make to your resumé in order to improve your chances of being invited to an interview.

1. Link to Your Code

Add links in your resumé to your profile on platforms such as GitHub, Stack Overflow, or HackerRank that prove your coding abilities and experience. Let your public code tell the recruiter or potential employer all about you.

2. Tailoring Your Resumé

Tailor your resumé to each specific job you’re applying for. You don’t have to completely redo the resumé each time, but investing a few minutes to add specific terminology used in the job description can boost your chances of getting called for an interview. For instance, companies like Google, Facebook, and Netflix each have their own criteria, with a focus on terminology around code optimization, security, or high availability.

3. Detail Your Skills

Provide detailed descriptions of the key tech skills required by the company. For example:

  • MySQL (stored procedures, caching, logging, replication)
  • CSS (sprites, styled-components, scroll snapping, text animations)

The point here is to highlight both the depth and breadth of your knowledge, and hence your ability to fulfill their needs.

4. Detail Your Impact

Don’t just describe your responsibilities. Instead, list the concrete ways in which you had an impact on every project. Employers look for people who can deliver results, and they want to see examples of how you’ve done this in the past.

Focus on the impact itself. The “what” is so much more impactful than the “how” (although both are important). Back up your statements with numbers and metrics wherever possible.

For example:

Increased test coverage to 60% with RSpec.

Improved monitoring, failure recovery, and observability of all systems by migrating code to a Kubernetes cluster.

Your ability to deliver results is what can persuade your potential employer to invite you for an interview.

5. Avoid Typos!

Check your resumé for incorrect technology spellings. For example:

  • HTML, not Html
  • JavaScript, not Javascript
  • MongoDB, not Mongo DB
  • GitHub, not Github

Even small typos make your resumé look unprofessional.

6. Keep It Simple

Ensure that your resumé looks clean and uncluttered. Avoid heavy graphics, QR codes, multiple columns, and icons, so that any applicant-tracking system can accurately scan your resumé.

7. Use Web-oriented Language

Enrich your descriptions with web-oriented language, such as “fully responsive”, “large-scale”, “cross-browser”, “high-load”, “scalable”, “highly-available”, “serverless”, “robust”, “distributed”, “maintainable”, “multi-threaded”, “modular”, “secure”, etc.

For example:

Led the development of a large-scale web application for video sharing and collaboration.

8. Use Clean, Modern Fonts

Forget about old-fashioned fonts. Instead, use modern fonts like Palanquin, Merriweather, Lato, or Poppins. This will give a fresh look and feel to your resumé.

9. Write Like You Speak

Make sure your resumé sounds like a person wrote it, not a machine. Write as you would speak. Make your descriptions engaging to read by adding interesting, work-related facts about yourself. Avoid buzzwords like “dedicated”, “detail-oriented”, “self-starter”, etc.

10. Don’t Self-rate

Don’t self-rate your tech skill levels, especially using percentages or stars (★★★★). Instead, use the experience section of your resumé to describe what you achieved with the skill. Your potential employer will most likely objectively evaluate your skills during a technical interview or a coding test.


That’s it! Follow these tips and you’ll likely see an increase in the number of interview invitations after applying for jobs!



An Introduction to Wireframing with Figma


In this article, we’ll explore what wireframing is, and why it’s worth doing it with Figma — the most-used UI design tool on the market today.

We’ll take a deep dive into Figma, and learn how to design user interfaces with it — digging into wireframing as we go.

BTW: if you take a look at the 2020 Design Tools Survey, Figma won in most categories: User Flows, UI Design, Prototyping, Handoff, Design Systems, Versioning, and even “most excited to try in 2021”.


What is a wireframe?

Wireframes are diagrams that depict the structure of a design. They can be either low-fidelity (for early user research) or mid-fidelity (for UX research); we’ll be focusing on the latter. Visuals are of no concern to us here, because all we want to do at this stage is figure out the content and layout (otherwise known as the “information architecture”).

What are we wireframing?

First, a little background on the UI we’ll be building. It will be a table-like structure showing various UX design tools and which step of the UX design workflow each tool is used in. The data will be user-submitted, so the aim is to see which UX design workflow is best, rather than the overdone “which UI design tool is best?”

Wireframing will help me figure out how best to structure this interface without wasting time on the little visual details. It won’t look amazing, but that’s fine; it just needs to look nice enough that users can offer me some feedback.

Yes, it’s a real UI. At the moment I’m calling it “Toolflows”.

Let’s begin!

Step 1: Set Up the Artboard

The majority of my website’s users are desktop users, so it makes sense to wireframe my design on a desktop artboard. Press A on your keyboard, then click Design > Desktop > MacBook from the right sidebar of Figma.

Creating artboards in Figma

Step 2: Gather Functional Requirements

Assuming that you or somebody else did some user research at some stage, we’ll need to refer to that to create our wireframe. While conducting user research (specifically, user interviews, focus groups, and user testing with low-fidelity wireframes), we would have been made aware of any functional requirements.

Ours are:

  • filter by tool
  • number of workflow users

Let’s start wireframing!

Step 3: Create Text and Shapes

First of all, there are Figma wireframe kits available, but I’m not exactly a fan of them. They make me feel constrained to work only with what’s available in the kit, which hinders creativity.

Instead, we’ll wireframe using text and shapes.

As we learned before with the artboard, the easiest way to create anything in Figma is to abuse the keyboard shortcuts:

  • T: Text
  • O: Ellipse
  • R: Rectangle
  • ⇧⌘K: Image
  • ⇧L: Arrow
  • L: Line

After that, it’s simply a case of clicking on the artboard roughly where you’d like the object to appear, and then you can use your mouse and arrow keys to adjust the size and alignment.

Useful shortcuts:

  • ⌘-/+ to Zoom.
  • ⌘D to Duplicate selected objects.
  • ⌘G to Group selected objects (⌘-click to select within).
  • Hover just outside a corner handle when mouse-resizing to rotate objects.
  • Hold ⇧ when mouse-resizing to maintain aspect ratio.
  • Use arrow keys to move objects by 1px (Hold ⇧ for 10px).
  • ⌘ + arrow keys to resize by 1px (Hold ⌘⇧ for 10px).

Quick wireframing with Figma

Next, we’ll move on to styling.

Step 4: Style, but Don’t Style

Using the (hopefully still visible) Design sidebar, we can alter the styles of objects on our artboard, both aesthetically and to specify the sizing and alignment of them more accurately.

Whether you’re using the Design sidebar or the arrow keys to size/align, hold ⌥ (Option) to measure the distance between objects.

Remember, don’t design much beyond sizing and aligning. Give rounded corners to buttons (so that we can very clearly see that they’re buttons), bolden headings, etc. Clarity, not aesthetics.

Resizing and aligning with Figma

And that’s it: now we have a wireframe. That said, for designs that require interaction, we’ll want to demonstrate exactly how our design would function, so without further ado, let’s move on to prototyping.

Step 5: Prototype the Design

Prototyping is the step where we make the design interactive (that is, make it feel as if it were the real thing).

Step 6: Create Transitions

The concept behind this step is to duplicate our artboard, demonstrate how we want our design to look at the end of the interaction, and then define the trigger (animation optional) that will initiate the transition between the two “states”.

Start by duplicating the artboard (⌘D), and then change whatever needs to be changed in this new artboard. In my case, I want to show the dropdown menu, which filters workflows by tool.

Creating transition with Figma

Next, switch to the Prototype sidebar before selecting the object that will be the trigger for your interaction. For me, this is the closed dropdown menu from the original artboard.

Once selected, there should be a draggable circular + icon. Drag this circle onto the second artboard to create a “connection”.

Creating connections with Figma

Step 7: Set Animation (Optional)

Next, we should set some animation for the interaction so that users can more easily see what’s changed between the two states.

Animations don’t need to be fancy, since we’re only designing with mid-fidelity right now, so from the Interaction details dialog that revealed itself after creating the connection, set the Animation to Smart animate. Smart animate animates layers that didn’t exist in the “trigger artboard”. Smart, ay?

If you’re prototyping with mid-fidelity wireframes, you won’t really need to tinker with any of the other options, but it’s cool that you know how to create animated connections now.

Step 8: Test, Test, Test

Next, we’ll want to test it, first to ensure that all of the connections work, then to acquire what I’d consider “low-level feedback” from stakeholders. Obviously the real value comes from testing with users, but that doesn’t mean that our stakeholders don’t have anything to contribute. Maybe we missed something?

To share with stakeholders, hit the Share button in the top-right corner. Stakeholders will be able to leave some comments.

User testing with Figma

For your own testing (which I’d recommend doing before sharing with anybody else), you’ll want to download Figma Mirror for iOS or Android. When designing for desktop, though, simply hit the play icon in the top-right to enter “Present” mode.

Bonus Step 9: Conduct Usability Tests

If you’d like to acquire more qualitative feedback on your design (task completion rate, time to completion, etc.), then apps like Maze and Useberry (which both work with Figma) will help you to accomplish exactly that. After all, this is why we’re wireframing, right? To make our design more usable.

Conclusion: Figma Has It All

Figma has it all, including a thriving and dedicated community made up of both macOS and Windows designers, and even those that simply want to design in their web browser (so, Linux too!).

So, what now? Well, you could explore the Figma community, see what they’re making, and maybe even download some Figma plugins to extend and automate your UI design workflow.