
30 Node.js Modules With Use-cases


A curated list of top useful Node.js modules that can help you in your upcoming backend
projects.

Top 30 NodeJS Modules With Use-cases


NodeJS is an open-source platform for creating applications that use JavaScript on the server side as well as on the client side. Basically, it allows us to run JavaScript code outside of the browser. It is a runtime environment that interprets JavaScript using Google's V8 JavaScript engine.

In this article, I am sharing a list of node modules that will help you in your node backend project.
You can install these modules using npm.

Node.js' package ecosystem (npm) is the largest ecosystem of open-source libraries in the world.

To install any node module run the below command in your terminal:
npm install module_name

Here is a list of node modules that will extend the capability of your node.js application.

1. Express: Express is a fast, unopinionated, minimalist web framework. It provides small, robust tooling for HTTP servers, making it a great solution for single-page applications, websites, hybrids, or public HTTP APIs.
const express = require("express");
const app = express();

app.get("/", (req, res) => {
  res.send("This is home page.");
});

app.post("/", (req, res) => {
  res.send("This is home page with post request.");
});

const PORT = 3000;
app.listen(PORT, () => {
  console.log(`Server is running on PORT: ${PORT}`);
});

2. Forever: A simple CLI tool for ensuring that a given node script runs continuously (i.e.
forever).
forever start server.js
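A few other day-to-day forever commands (a quick sketch; run forever --help for the full list):

forever list              # show scripts currently managed by forever
forever restart server.js # restart a running script
forever stop server.js    # stop it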

3. Nodemon: It is a simple monitor script for use during the development of a Node.js app. It will watch the files in the directory in which nodemon was started, and if any files change, nodemon will automatically restart your node application.
nodemon server.js
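Nodemon can also be configured with a nodemon.json file in the project root. A minimal sketch (the watched folder and extensions below are just examples, not defaults from this article):

{
  "watch": ["src"],
  "ext": "js,json",
  "ignore": ["*.test.js"]
}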

4. Helmet: Helmet middleware is a toolkit that helps you to secure your Express apps by setting
various HTTP headers.
const express = require('express');
const helmet = require("helmet");

const app = express();

app.use(helmet());

// or you can use individual headers
app.use(helmet.contentSecurityPolicy());
app.use(helmet.crossOriginEmbedderPolicy());
app.use(helmet.crossOriginOpenerPolicy());
app.use(helmet.crossOriginResourcePolicy());
app.use(helmet.dnsPrefetchControl());
app.use(helmet.expectCt());
app.use(helmet.frameguard()); // Prevent clickjacking attacks
app.use(helmet.hidePoweredBy()); // Disable tech-stack info in the header
app.use(helmet.hsts()); // Set strict transport security
app.use(helmet.ieNoOpen());
app.use(helmet.noSniff());
app.use(helmet.originAgentCluster());
app.use(helmet.permittedCrossDomainPolicies());
app.use(helmet.referrerPolicy());
app.use(helmet.xssFilter()); // Mitigate cross-site scripting attacks
5. Cors: CORS is shorthand for Cross-Origin Resource Sharing. It is a mechanism to allow or restrict requested resources on a web server depending on where the HTTP request was initiated. This policy is used to secure a certain web server from access by other websites or domains. For example, only the allowed domains will be able to access hosted files on a server, such as a stylesheet, image, or script.
const express = require('express')
const cors = require('cors')

const app = express()

// Simple usage (enable all CORS requests)
app.use(cors())

// Enable CORS for a single route
app.get('/products/:id', cors(), function (req, res, next) {
  res.json({msg: 'This is CORS-enabled for a Single Route'})
})

6. Moment: A lightweight JavaScript date library for parsing, validating, manipulating, and
formatting dates.
const moment = require("moment");

var now = "04/09/2013 15:00:00";
var then = "02/09/2013 14:20:30";

var ms = moment(now, "DD/MM/YYYY HH:mm:ss").diff(moment(then, "DD/MM/YYYY HH:mm:ss"));
var d = moment.duration(ms);
// Note: calling .format() on a duration requires the moment-duration-format plugin
var s = d.format("hh:mm:ss");

7. Morgan: HTTP request logger middleware for node.js. Passing the preset 'tiny' as an argument to morgan() uses a built-in minimal format that logs the method, the URL, the status code, the response length, and the request's response time in milliseconds.
const express = require('express');
const morgan = require('morgan');

const app = express();
app.use(morgan('tiny'));

8. Validator: A node module for a library of string validators and sanitizers.


const validator = require('validator')

// Check whether a given email is valid or not
var email = 'test@example.com'
console.log(validator.isEmail(email)) // true
email = 'test@'
console.log(validator.isEmail(email)) // false

// Check whether a string is lowercase or not
var name = 'geeksforgeeks'
console.log(validator.isLowercase(name)) // true
name = 'GEEKSFORGEEKS'
console.log(validator.isLowercase(name)) // false

// Check whether a string is empty or not
name = ''
console.log(validator.isEmpty(name)) // true
name = 'geeksforgeeks'
console.log(validator.isEmpty(name)) // false

// Other functions are also available in this module,
// like isBoolean(), isCurrency(), isDecimal(), isJSON(),
// isJWT(), isFloat(), isCreditCard(), etc.

9. Async: Async is a utility module that provides straight-forward, powerful functions for working
with asynchronous JavaScript.

Many helper methods exist in Async that can be used in different situations, like series, parallel,
waterfall, etc. Each function has a specific use case, so take some time to learn which one will
help in which situations.
const async = require('async');

async.series([
  function(callback) {
    // do some stuff ...
    callback(null, 'one');
  },
  function(callback) {
    // do some more stuff ...
    callback(null, 'two');
  }
],
// optional callback
function(err, results) {
  // results is now equal to ['one', 'two']
});

// ============================================================

async.parallel({
  one: function(callback) {
    // ...
  },
  two: function(callback) {
    // ...
  },
  // ...
  something_else: function(callback) {
    // ...
  }
},
// optional callback
function(err, results) {
  // 'results' is now equal to: {one: 1, two: 2, ..., something_else: some_value}
});

// ============================================================

async.waterfall([
  function(callback) {
    callback(null, 'one', 'two');
  },
  function(arg1, arg2, callback) {
    // arg1 now equals 'one' and arg2 now equals 'two'
    callback(null, 'three');
  },
  function(arg1, callback) {
    // arg1 now equals 'three'
    callback(null, 'done');
  }
], function(err, result) {
  // result now equals 'done'
});

10. Mongoose: It is a MongoDB ODM (object data modeling) tool designed to work in an asynchronous environment. This package enables you to easily connect to a MongoDB database using Node.js.
const mongoose = require('mongoose');

const connectDB = async () => {
  mongoose
    .connect('mongodb://localhost:27017/playground', {
      useCreateIndex: true,
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useFindAndModify: false
    })
    .then(() => console.log('Connected Successfully'))
    .catch((err) => console.error('Not Connected'));
};

module.exports = connectDB;

11. Mysql: The mysql package enables you to easily connect to a MySQL database using Node.js.
var mysql = require('mysql');

var connection = mysql.createConnection({
  host     : 'localhost',
  database : 'dbname',
  user     : 'username',
  password : 'password',
});

connection.connect(function(err) {
  if (err) {
    console.error('Error connecting: ' + err.stack);
    return;
  }

  console.log('Connected as id ' + connection.threadId);
});

connection.query('SELECT * FROM employee', function (error, results, fields) {
  if (error) throw error;

  results.forEach(result => {
    console.log(result);
  });
});

connection.end();

12. Nodemailer: This module enables e-mail sending from Node.js applications.
"use strict";
const nodemailer = require("nodemailer");

async function main() {
  // Generate a test SMTP account from ethereal.email
  let testAccount = await nodemailer.createTestAccount();

  let transporter = nodemailer.createTransport({
    host: "smtp.ethereal.email",
    port: 587,
    secure: false, // true for 465, false for other ports
    auth: {
      user: testAccount.user, // generated ethereal user
      pass: testAccount.pass, // generated ethereal password
    },
  });

  let info = await transporter.sendMail({
    from: '"Fred Foo" <foo@example.com>', // sender address
    to: "bar@example.com, baz@example.com", // list of receivers
    subject: "Hello ✔", // Subject line
    text: "Hello world?", // plain text body
    html: "<b>Hello world?</b>", // html body
  });

  console.log("Message sent: %s", info.messageId);
  console.log("Preview URL: %s", nodemailer.getTestMessageUrl(info));
}

main().catch(console.error);

13. Bcrypt: The bcrypt NPM package is a JavaScript implementation of the bcrypt
password hashing function that allows you to easily create a hash out of a password
string.

Hashing is a one-way ticket to data encryption. Hashing performs a one-way transformation on a password, turning the password into another string, called the hashed password. Hashing is called one-way because it is practically impossible to get the original text back from a hash.
const bcrypt = require("bcrypt");
const express = require("express");
const User = require("./userModel");

const router = express.Router();

// signup route
router.post("/signup", async (req, res) => {
  const body = req.body;

  if (!(body.email && body.password)) {
    return res.status(400).send({
      error: "Data not formatted properly"
    });
  }

  // creating a new mongoose doc from user data
  const user = new User(body);
  // generate salt to hash password
  const salt = await bcrypt.genSalt(10);
  // now we set user password to hashed password
  user.password = await bcrypt.hash(user.password, salt);
  user.save().then((doc) => res.status(201).send(doc));
});

// login route
router.post("/login", async (req, res) => {
  const body = req.body;
  const user = await User.findOne({ email: body.email });

  if (user) {
    // check user password against the hashed password stored in the database
    const validPassword = await bcrypt.compare(body.password, user.password);
    if (validPassword) {
      res.status(200).json({ message: "Valid password" });
    } else {
      res.status(400).json({ error: "Invalid Password" });
    }
  } else {
    res.status(401).json({ error: "User does not exist" });
  }
});

module.exports = router;
14. Express-rate-limit: Basic rate-limiting middleware for Express. Use it to limit repeated requests to public APIs and/or endpoints such as password reset.

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // Limit each IP to 100 requests per `window` (here, per 15 minutes)
  standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
  legacyHeaders: false, // Disable the `X-RateLimit-*` headers
})

// Apply the rate limiting middleware to all requests
app.use(limiter)

15. Response-time: This module creates a middleware that records the response time for
requests in HTTP servers. The “response time” is defined here as the elapsed time from when a
request enters this middleware to when the headers are written out to the client.
const express = require('express');
const responseTime = require('response-time');

const app = express();
app.use(responseTime());

app.get('/', function(req, res) {
  res.send('hello, world!');
});

16. connect-busboy: busboy is a streaming parser for HTML form data for node.js.
const express = require('express');
const busboy = require('connect-busboy');
const path = require('path');
const fs = require('fs-extra');

const app = express();

app.use(busboy({
  highWaterMark: 2 * 1024 * 1024, // Set 2MiB buffer
}));

const uploadPath = path.join(__dirname, 'uploads/'); // Register the upload path
fs.ensureDir(uploadPath); // Make sure that the upload path exists

app.route('/upload').post((req, res, next) => {
  req.pipe(req.busboy); // Pipe it through busboy

  req.busboy.on('file', (fieldname, file, filename) => {
    console.log(`Upload of '${filename}' started`);

    const fstream = fs.createWriteStream(path.join(uploadPath, filename));
    // Pipe it through to the file stream
    file.pipe(fstream);

    fstream.on('close', () => {
      console.log(`Upload of '${filename}' finished`);
      res.redirect('back');
    });
  });
});

const server = app.listen(3200, function() {
  console.log(`Listening on port ${server.address().port}`);
});

17. Google-auth-library: This is Google's officially supported Node.js client library for using OAuth 2.0 authorization and authentication with Google APIs.
const { OAuth2Client } = require("google-auth-library");
const jwt = require("jsonwebtoken");
const bcrypt = require("bcrypt");
const User = require("./userModel");

async function googleSignInUser(request, response) {
  const client = new OAuth2Client(process.env.GOOGLE_CLIENT_ID);
  const { idToken } = request.body;

  client
    .verifyIdToken({ idToken, audience: process.env.GOOGLE_CLIENT_ID })
    .then((res) => {
      const { email_verified, name, email } = res.payload;
      if (email_verified) {
        User.findOne({ email }).exec((err, user) => {
          if (user) {
            const { _id, email, fullName } = user;
            const token = jwt.sign({ email: email }, process.env.SECRET_KEY, {
              expiresIn: process.env.EXPIRE_IN,
            });
            return response.status(200).json({
              accessToken: token,
              user: { _id, email, fullName },
            });
          } else {
            const password = email + process.env.SECRET_KEY;
            bcrypt.hash(password, 12, async (err, passwordHash) => {
              if (err) {
                response.status(500).send("Couldn't hash the password");
              } else if (passwordHash) {
                return User.create({
                  email: email,
                  fullName: name,
                  hash: passwordHash,
                }).then((data) => {
                  const { _id, email, fullName } = data;
                  const token = jwt.sign(
                    { email: email },
                    process.env.SECRET_KEY,
                    { expiresIn: process.env.EXPIRE_IN }
                  );
                  response.status(200).json({
                    accessToken: token,
                    user: { _id, email, fullName },
                  });
                });
              }
            });
          }
        });
      } else {
        return response.status(400).json({
          error: "Google login failed. Try again",
        });
      }
    });
}
18. Redis: Redis is a super fast and efficient in-memory, key-value cache and store. It’s also
known as a data structure server, as the keys can contain strings, lists, sets, hashes, and other data
structures.
const redis = require("redis");

const client = redis.createClient();
client.on('error', (err) => console.log('Redis Client Error', err));

(async () => {
  await client.connect();
  await client.set('key', 'value');
  const value = await client.get('key');
  console.log(value); // 'value'
})();
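Since Redis is a data structure server, the same client can work with more than plain strings. A small sketch using the hash commands of node-redis v4 (the key and fields below are made up for the example):

await client.hSet('user:1', 'name', 'Ada');
await client.hSet('user:1', 'role', 'admin');
const profile = await client.hGetAll('user:1');
console.log(profile); // { name: 'Ada', role: 'admin' }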

19. Joi: The most powerful schema description language and data validator for JavaScript.
const Joi = require('joi');

app.post('/blog', async (req, res, next) => {
  const { body } = req;

  const blogSchema = Joi.object().keys({
    title: Joi.string().required(),
    description: Joi.string().required(),
    authorId: Joi.number().required()
  });

  const result = blogSchema.validate(body);
  const { value, error } = result;
  const valid = error == null;

  if (!valid) {
    res.status(422).json({
      message: 'Invalid request',
      data: body
    });
  } else {
    const createdPost = await api.createPost(value);
    res.json({ message: 'Resource created', data: createdPost });
  }
});
20. Winston: Winston is one of the best logging middlewares. Logging is the process of recording information generated by application activities into log files. Messages saved in the log file are called logs. A log is a single instance recorded in the log file.

A log is the first place to look, as a programmer, to track down errors and the flow of events, especially on a server. A log tells you what happens when an app is running and interacting with your users. A great use case for logging would be if, for example, you have a bug in your system and you want to understand the steps that led up to its occurrence. Let's take an example of a custom logger.js:
const { createLogger, format, transports, config } = require('winston');
const { combine, timestamp } = format;

const usersLogger = createLogger({
  levels: config.syslog.levels,
  format: combine(
    timestamp({
      format: 'YYYY-MM-DD HH:mm:ss'
    })
  ),
  transports: [
    new transports.File({ filename: 'users.log' })
  ]
});

const transactionLogger = createLogger({
  transports: [
    new transports.File({ filename: 'transaction.log' })
  ]
});

module.exports = {
  usersLogger: usersLogger,
  transactionLogger: transactionLogger
};

21. node-fetch: A light-weight module that brings Fetch API to node.js.


const fetch = require('node-fetch');

fetch('https://api.github.com/users/github')
  .then(res => res.json())
  .then(json => console.log(json));

22. WS (WebSocket library): It is a simple to use, blazing-fast, and thoroughly tested WebSocket client and server implementation.
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function connection(ws) {
  ws.on('message', function message(data) {
    console.log('received: %s', data);
  });

  ws.send('something');
});

23. loadtest: Runs a load test on the selected HTTP or WebSockets URL. The API allows for easy
integration in your own tests.
$ loadtest [-n requests] [-c concurrency] [-k] URL
$ loadtest -n 100000 -c 10000 http://localhost:9090/
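The same package also exposes a programmatic API that you can call from your own test code. A hedged sketch (the URL and the numbers are placeholders):

const loadtest = require('loadtest');

const options = {
  url: 'http://localhost:9090/',
  maxRequests: 1000, // stop after this many requests
  concurrency: 10,   // number of simultaneous clients
};

loadtest.loadTest(options, (error, result) => {
  if (error) {
    return console.error('Got an error: %s', error);
  }
  console.log('Tests run successfully:', result);
});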
24. i18next: i18next is a very popular internationalization framework for browsers or any other JavaScript environment (e.g. Node.js, Deno). The snippet below actually uses the simpler i18n package, which demonstrates the same idea: locale files in a directory plus a translation helper on the response.
const http = require('http');
const path = require('path');
const { I18n } = require('i18n');

const i18n = new I18n({
  locales: ['en', 'de'],
  directory: path.join(__dirname, 'locales')
});

const app = http.createServer((req, res) => {
  i18n.init(req, res);
  res.end(res.__('Hello'));
});

app.listen(3000, '127.0.0.1');

25. jsonwebtoken: JWT, or JSON Web Token, is an open standard used to share security
information between two parties — a client and a server.
const jwt = require("jsonwebtoken");
const bcrypt = require("bcrypt");
const User = require("./userModel"); // your mongoose user model

app.post("/login", async (req, res) => {
  try {
    const { email, password } = req.body;

    // Validate user input
    if (!(email && password)) {
      return res.status(400).send("All input is required");
    }

    // Validate if the user exists in our database
    const user = await User.findOne({ email });

    if (user && (await bcrypt.compare(password, user.password))) {
      // Create token
      const token = jwt.sign(
        { user_id: user._id, email },
        process.env.TOKEN_KEY,
        { expiresIn: "2h" }
      );

      // save user token
      user.token = token;

      // return the logged-in user
      return res.status(200).json(user);
    }

    return res.status(400).send("Invalid Credentials");
  } catch (err) {
    console.log(err);
  }
});

26. Cookie-parser: cookie-parser is a middleware that parses cookies attached to the client request object. To use it, we require it in our index.js file; it can be used the same way as any other middleware.
const Express = require('express');
const CookieParser = require('cookie-parser');

const app = Express();
const port = 80;

app.use(CookieParser());

app.get("/send", (req, res) => {
  res.cookie("loggedin", "true");
  res.send("Cookie sent!");
});

app.get("/read", (req, res) => {
  let response = "Not logged in!";
  if (req.cookies.loggedin == "true") {
    response = "Yup! You are logged in!";
  }
  res.send(response);
});

app.listen(port, () => {
  console.log("Server running!");
});

27. Config: Node-config organizes hierarchical configurations for your app deployments.
npm install config

Create a config directory and add a config/default.json file to it. This will be the default config file
and will contain all your default environment variables.
{
  "server": {
    "host": "localhost",
    "port": 8080
  }
}
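Node-config can also layer environment-specific files on top of default.json. As a hedged sketch (the file name and values are just an example), a config/production.json like the one below overrides only the port when the app is started with NODE_ENV=production:

{
  "server": {
    "port": 80
  }
}

NODE_ENV=production node server.js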

To use the config file:

const express = require('express');
const config = require('config');

const app = express();

const port = config.get('server.port');
const host = config.get('server.host');

app.get('/', (req, res) => {
  res.send('Hello World');
});

const server = app.listen(port, host, (err) => {
  if (err) {
    console.log(err);
    process.exit(1);
  }
  console.log(`Server is running on ${host}:${server.address().port}`);
});

28. Supertest: SuperTest is a Node.js library that helps developers test APIs. It extends another library called superagent, a JavaScript HTTP client for Node.js and the browser. Developers can use SuperTest as a standalone library or with JavaScript testing frameworks like Mocha or Jest.
const request = require('supertest');
const app = require('./app');

describe('Testing POST /shots endpoint', function() {
  it('responds with a valid HTTP status code, status and message', async function() {
    const response = await request(app).post('/shots').send({
      title: 'How to write a shot',
      body: "Access the Edpresso tutorial"
    });

    expect(response.status).toBe(200);
    expect(response.body.status).toBe('success');
    expect(response.body.message).toBe('Shot Saved Successfully.');
  });
});

29. Multer: Multer is a Node.js middleware for handling multipart/form-data, which is primarily used for uploading files. It is written on top of busboy for maximum efficiency.
// upload.js
const multer = require("multer");
const path = require("path");

const storage = multer.diskStorage({
  destination: (req, file, cb) => {
    cb(null, "./public/uploads/images/");
  },
  filename: (req, file, cb) => {
    cb(null, Date.now() + file.originalname);
  },
});

const fileFilter = (req, file, cb) => {
  if (
    file.mimetype === "image/jpeg" ||
    file.mimetype === "image/jpg" ||
    file.mimetype === "image/png"
  ) {
    cb(null, true);
  } else {
    cb(null, false);
  }
};

module.exports = multer({
  storage: storage,
  limits: {
    fileSize: 1024 * 1024 * 5,
  },
  fileFilter: fileFilter,
});

// Use this middleware
const upload = require("./upload");

app.post('/uploadfile', upload.single('myFile'), (req, res, next) => {
  const file = req.file;
  if (!file) {
    const error = new Error('Please upload a file');
    error.httpStatusCode = 400;
    return next(error);
  }
  res.send(file);
});

30. Compression: It is a Node.js compression middleware. Compression in Node.js and Express decreases the amount of downloadable data that is served to users.
const compression = require('compression');
const express = require('express');

const app = express();

// compress all responses
app.use(compression());

Hope you like this article and that it helps you in your upcoming projects.

Happy Learning!!!!

How to Convert an Audio File into Video in NodeJS

Power of FFmpeg with NodeJS


Photo by Kyle Loftus: https://www.pexels.com/photo/silhouette-of-man-standing-in-front-of-microphone-3379934/

Converting audio files into video files is an everyday use case in the current age of content
production.
While there are many ways to do it via some custom websites, we programmers don’t follow that
easy, simple path, right?

Today, I will show you how to convert an audio file into a video file in NodeJS.

What Will We Use?

We will use the power of FFmpeg. In their documentation, they identify themselves as:

A complete, cross-platform solution to record, convert and stream audio and video.

This is not something specific to NodeJS. Instead, it's an OS-level tool that you can install on your machine by running the following commands on Linux:

sudo apt update
sudo apt install ffmpeg

And verifying the version


ffmpeg -version
If you are on macOS, it’s even easier.
brew install ffmpeg

If you want to learn how to use FFmpeg in Docker, you can check the article "How to Use FFmpeg with Node.js and Docker" on javascript.plainenglish.io.

The problem with NodeJS

The problem is that accessing FFmpeg directly from NodeJS can be tricky. However, several libraries create an abstraction on top of FFmpeg.

Some of the most notable ones are fluent-ffmpeg and ffcreator.

Today we will use the light version of ffcreator, which is called ffcreatorlite.

Let’s get started


I am assuming you already have a NodeJS project up and running. If not, you can use the following boilerplate: GitHub - Mohammad-Faisal/nodejs-typescript-skeleton (a skeleton project using NodeJS and TypeScript, github.com).

Just run the following command:


git clone https://github.com/Mohammad-Faisal/nodejs-typescript-skeleton.git

It will give you a basic NodeJS project.

Install Dependencies

Add the required dependency


yarn add ffcreatorlite

Then add an audio file to the project. You will probably also want a cover image for your generated
video, right? So bring that in too.
|- src
|----index.ts
|----source.mp3 // your audio file
|----cover.png // your cover image

That’s the preparation. Let’s build it.

Get the Individual Functions

First, you will need an instance of the FFcreator


import path from 'path';
import { FFScene, FFImage, FFCreator } from 'ffcreatorlite';

const CANVAS_WIDTH = 1246; // play with the dimensions. I am creating a 16:9 canvas for youtube videos
const CANVAS_HEIGHT = 700;
const VIDEO_DURATION = 30;

const getCreatorInstance = () => {
  const outputDir = path.join(__dirname, '../assets/output/'); // you can add anything
  const cacheDir = path.join(__dirname, '../assets/cache/');

  return new FFCreator({
    cacheDir,
    outputDir,
    width: CANVAS_WIDTH,
    height: CANVAS_HEIGHT,
  });
};
Now add a function for adding the audio:

const addAudio = (creator: FFCreator) => {
  const audio = `./source.mp3`; // adding the audio
  creator.addAudio(audio);
  return creator;
};
Another function to add the cover image:

const addCoverImage = (creator: FFCreator) => {
  const coverImagePath = './cover.png';

  const scene = new FFScene();
  scene.setDuration(VIDEO_DURATION);

  const backgroundImage = new FFImage({
    path: coverImagePath,
    x: 0,
    y: 0,
  });
  scene.addChild(backgroundImage);

  creator.addChild(scene);
  return creator;
};

Let’s Combine Them All!

Now let’s combine these functions to create our video file from the audio file.
const generateVideoFromAudioFile = async (): Promise<string> => {
  return new Promise((resolve, reject) => {
    let creator = getCreatorInstance();

    creator = addAudio(creator);
    creator = addCoverImage(creator);

    creator.start();
    creator.closeLog();

    creator.on('start', () => {
      console.log(`FFCreator start`);
    });

    creator.on('error', (e: any) => {
      console.log(`FFCreator error: ${e.error}`);
      reject(e);
    });

    creator.on('progress', (e: any) => {
      console.log(`FFCreator progress: ${(e.percent * 100) >> 0}%`);
    });

    creator.on('complete', async (e: any) => {
      console.info(`FFCreator completed: \n USEAGE: ${e.useage} \n PATH: ${e.output}`);
      resolve(e.output);
    });
  });
};

The syntax is a little weird because of the structure of the ffcreatorlite library, but what this essentially does is:
1. Creates a creator instance
2. Adds an audio track
3. Adds a cover image
4. Starts the process and waits for its completion
5. After completion, returns the generated video file path

So, now you can run the function like the following:
await generateVideoFromAudioFile();

And after everything is finished, you will see a generated video file randomid.mp4 inside your
project, which you can use any way you like.
Final Thoughts

I have shown a minimal use case that is possible with this awesome library. There are a lot of things you can do with the ffcreatorlite and ffcreator libraries, like adding multiple images with transitions and much more.

I encourage you to try that.

That’s it for today. Have a great day!


Want to connect? You can reach out to me via LinkedIn or my personal website.

Deploy Nodejs Backend App on Digital Ocean

1. Create A Droplet
In digital ocean, click on the create button and select the droplet option
- Choose an image (I will go with Ubuntu)
- Choose a plan based on your project needs
- Choose a datacenter region
- In the Authentication section, click on New SSH Key and continue with the second step of this article to create an SSH key
2. Download putty.exe and puttygen.exe to generate a key, save settings and
connect to the server/droplet easily later on

https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html

• Run puttygen.exe to create SSH key


- Click on generate and move the mouse over the blank area to create some randomness

• Save public key as a txt file in a safe folder

• Save private key as a .ppk file in the same folder with public key

• Copy public key and paste it in SSH Key Content area in digital ocean

• In droplet creating page, if you need, select additional options and finalize and create the
droplet.

3. Connect Via Putty


• Open up putty.exe

• Type in your droplet ip address in Host Name field


• Click on the connection->data tab and add ‘root’ in Auto-Login Username field

• Click on the SSH->Auth tab and browse for your private key file (.ppk)

• Go back to the session tab and give a name to the session in Saved Sessions field and click
on save button.

• Click on open button to connect to the server/droplet

4. Create New User and Authorize Key For New User

In terminal, write the command below to create a user with name ‘newuser’. You can write any
name you want.

• adduser newuser (create user — save password)

• usermod -aG sudo newuser (add the user to the sudo group)

• sudo su - newuser (login as the new user)

• mkdir ~/.ssh (create ssh directory)

• chmod 700 ~/.ssh


• nano ~/.ssh/authorized_keys (open up authorized_keys file)

• copy and paste your public key in form below without linebreaks
ssh-rsa your-public-key

• ctrl-x, then y, then enter to save the file and exit

• chmod 600 ~/.ssh/authorized_keys

• sudo service ssh restart


enter your password you have created when adding new user

• in putty click on session name you saved and click on load button

• in connection->data tab, change Auto-Login Username to new username

• go back to session tab and click on save and then open

• now you have logged in as a new user without entering any password

5. Disable Root and Password Login


• sudo nano /etc/ssh/sshd_config
• change the following lines
PermitRootLogin no
PasswordAuthentication no

• save and exit (ctrl-x, then y, then enter)

• sudo systemctl reload sshd (to reload ssh with this command)

6. Install Nodejs, Npm and Git


• sudo apt update

• sudo apt install nodejs

• sudo apt install npm

• sudo apt-get install git

7. Github Permission and Clone the Project


• ssh-keygen -t rsa -C "your github email" (with straight quotation marks; add the generated public key, ~/.ssh/id_rsa.pub, to your GitHub account so the server is allowed to clone your repos)
• git clone <your-github-project>

• cd your-project-directory

• npm install

• node server.js/index.js (run your node project)

• you can check your app running at droplet-ip-address:port, like 123.123.12.12:5000

• ctrl-c to stop running node

• sudo npm install -g pm2 (install pm2 to run project automatically)

• pm2 start server.js/index.js (run project)

• the project is now running, but only on the IP address. In order to run it on a domain name, you need to arrange the DNS settings to redirect that domain to DigitalOcean's servers

• go to your domain provider's site, open up the DNS settings, and point the name servers at DigitalOcean (ns1.digitalocean.com, ns2.digitalocean.com, ns3.digitalocean.com)
• In digital ocean, go to networking

• in domain tab, enter your domain name and add it

• in the create new record page, under the A tab, enter @ in the hostname field, select your droplet in the "will direct to" field, and create the record

• on the same page, go to the CNAME tab, enter www in the hostname field, enter @ in the "is an alias of" field, and create the record

• now you can check the domain name with the port stated in the project's server.js/index.js file

• in order to remove port part and use domain name only, apply the following steps
• pm2 stop server.js/index.js

• sudo nano server.js/index.js

• change port to 80 and save it

• sudo apt-get install libcap2-bin


sudo setcap 'cap_net_bind_service=+ep' `which node`

• pm2 start server.js/index.js

Now it is running in the domain name you have added without port number specified.

Authentication and Authorization with JWTs in Node && Express.js

In this tutorial, we'll learn how to build an authentication system for a Nodejs & Express application using JWT.
We'll be working on the project from the tutorial "Build an API using Node, Express, MongoDB, and Docker". You can find the source code for this tutorial here.

What is Authentication and Authorization?

Simply, authentication is the process of verifying the identity of someone.

Authorization is the process of verifying what data the user can have access to.

And authorization only occurs when you’ve been authenticated. Then, the system will grant you
access to the files you need.

Setup the project

First of all, clone the project.


git clone https://github.com/koladev32/node-docker-tutorial.git

Once it’s done, go inside the project and run.


yarn install

Start the project using :


yarn start
Inside the root of the project, create a .env file.
// .env
JWT_SECRET_KEY=)a(s3eihu+iir-_3@##ha$r$d4p5%!%e1==#b5jwif)z&kmm@7

You can easily generate a new value for this secret key online here.

Creating the User model

Let’s create the User model. But first, we need to define a type for this model.
// src/types/user.ts
import { Document } from "mongoose";

export interface IUser extends Document {


username: string;
password: string;
isAdmin: boolean;
}

Great, then we can write the User model.


// src/models/user.ts

import { IUser } from "../types/user";


import { model, Schema } from "mongoose";

const userSchema: Schema = new Schema(


{
username: {
type: String,
required: true,
unique: true,
},
password: {
type: String,
required: true,
},
isAdmin: {
type: Boolean,
required: false,
default: false,
},
},
{ timestamps: true }
);

export default model<IUser>("user", userSchema);

The User model is created. We can go and start writing the Login and Register controllers.

Registration

Go to the controllers directory and create a new directory users which will contain a
new index.ts file.

Let's write the registerUser controller.


// src/controllers/users/index.ts
import e, { Response, Request } from "express";
import { IUser } from "../../types/user";
import User from "../../models/user";
const bcrypt = require("bcrypt");
const jwt = require("jsonwebtoken");

let refreshTokens: string[] = [];

const registerUser = async (
  req: Request,
  res: Response
): Promise<e.Response<any, Record<string, any>>> => {
  try {
    const { username, password } = req.body;
    if (!(username && password)) {
      return res.status(400).send("All inputs are required");
    }

    // Checking if the user already exists
    const oldUser = await User.findOne({ username });

    if (oldUser) {
      return res.status(400).send("User Already Exist. Please Login");
    }

    const user: IUser = new User({
      username: username,
    });

    const salt = await bcrypt.genSalt(10);
    // now we set user password to hashed password
    user.password = await bcrypt.hash(password, salt);

    return user
      .save()
      .then((doc) => {
        // Generating access and refresh tokens
        const token = jwt.sign(
          { user_id: doc._id, username: username },
          process.env.JWT_SECRET_KEY,
          {
            expiresIn: "5min",
          }
        );

        const refreshToken = jwt.sign(
          { user_id: doc._id, username: username },
          process.env.JWT_SECRET_KEY
        );

        refreshTokens.push(refreshToken);

        return res.status(201).json({
          user: doc,
          token: token,
          refresh: refreshToken,
        });
      })
      .catch(() => res.status(400).send("Unable to create user"));
  } catch (error) {
    throw error;
  }
};

export { registerUser };

What are we doing here?


• Check that the required fields have been provided

• Check that there is no existing user with the same username

• Creating the user and encrypting the password

• Generating refresh and access tokens

• Send responses

But why a refresh and an access token?

When the token expires, the intuitive way to claim a new access token would be to log in again. But that is a poor experience for users. So instead of logging in again, the client can claim a new access token by making a request with the refresh token obtained at login or registration. We'll write the routes for this later.

Now, let’s add this controller to the routes and register the new routes in our application.
// src/routes/index.ts

import { Router } from "express";


import {
getMenus,
addMenu,
updateMenu,
deleteMenu,
retrieveMenu,
} from "../controllers/menus";
import {
registerUser
} from "../controllers/users";

const menuRoutes: Router = Router();

const userRoutes: Router = Router();

// Menu Routes

menuRoutes.get("/menu", getMenus);
menuRoutes.post("/menu", addMenu);
menuRoutes.put("/menu/:id", updateMenu);
menuRoutes.delete("/menu/:id", deleteMenu);
menuRoutes.get("/menu/:id", retrieveMenu);

// User Routes

userRoutes.post("/user/register", registerUser);

export { menuRoutes, userRoutes };

And inside the app.ts file, let's use the new route.
// src/app.ts

import { menuRoutes, userRoutes } from "./routes";


...
app.use(cors());
app.use(express.json());

app.use(userRoutes);
...

The endpoint is available at localhost:4000/user/register.

Login
Inside the index.ts file of users controllers, let's write the login function.
// src/controllers/users/index.ts

const loginUser = async (


req: Request,
res: Response
): Promise<e.Response<any, Record<string, any>>> => {
try {
const { username, password } = req.body;
if (!(username && password)) {
return res.status(400).send("All inputs are required");
}

// Checking if the user exists

const user: IUser | null = await User.findOne({ username });

if (user && (await bcrypt.compare(password, user.password))) {


// Create token
const token = jwt.sign(
{ user_id: user._id, username: username },
process.env.JWT_SECRET_KEY,
{
expiresIn: "5min",
}
);

const refreshToken = jwt.sign(


{ user_id: user._id, username: username },
process.env.JWT_SECRET_KEY
);

refreshTokens.push(refreshToken);
// user
return res.status(200).json({
user: user,
token: token,
refresh: refreshToken,
});
}

return res.status(400).send("Invalid Credentials");


} catch (error) {
throw error;
}
};

export { registerUser, loginUser };

So what are we doing here?

• Check that the required fields have been provided

• Check that the user exists

• Compare the password and create new tokens if everything is right

• Then send responses

If these validations are not done, we send error messages as well.


Add it to the routes and log in using http://localhost:4000/user/login.
// src/routes/index.ts

...
userRoutes.post("/user/login", loginUser);
...

Protecting the Menu resources

Ah, great. The login endpoint is done, and the registration endpoint is done. But the resources are not protected: you can still access them without a token, which is why we need to write a middleware.

A middleware is a function that acts as a bridge between a request and the function that executes that request.

Create a new directory named middleware inside src and create a file index.ts.

Great, let’s write our middleware.


// src/middleware/index.ts

import e, { Response, Request, NextFunction } from "express";


import { IUser } from "../types/user";

const jwt = require("jsonwebtoken");


const authenticateJWT = async (
  req: Request,
  res: Response,
  next: NextFunction
): Promise<e.Response<any, Record<string, any>> | void> => {
  const authHeader = req.headers.authorization;

  if (authHeader) {
    const [header, token] = authHeader.split(" ");

    if (!(header && token)) {
      return res.status(401).send("Authentication credentials are required.");
    }

    jwt.verify(token, process.env.JWT_SECRET_KEY, (err: Error, user: IUser) => {
      if (err) {
        return res.sendStatus(403);
      }

      req.user = user;
      next();
    });
  } else {
    return res.sendStatus(401);
  }
};

export default authenticateJWT;

What are we doing here?


• Making sure there is an authorization header. We expect the value of this header to be in the format 'Bearer <token>'.

• Verifying the token and then creating a new key with user as value. req.user = user

• And finally using next() to execute the next function.

Now, let’s use the middleware in our application.


// src/app.ts

import authenticateJWT from "./middleware";


...

app.use(userRoutes);

app.use(authenticateJWT);

app.use(menuRoutes);
...

Did you notice something? The middleware is placed after the userRoutes and before the menuRoutes. This way, node & express understand that the userRoutes are not protected, and that all the routes registered after authenticateJWT require an access token.
To test this, make a GET request to http://localhost:4000/menu without an authorization header. You'll receive a 401 error. Then take the access token from your previous login and add it to the authorization header; you should retrieve the menus, as shown below.
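For example, with curl (the token value below is just a placeholder for the access token returned by your login request):

curl http://localhost:4000/menu
# -> 401

curl -H "Authorization: Bearer <your-access-token>" http://localhost:4000/menu
# -> 200 and the list of menus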

Refresh token

It’s time now to write the refresh token controller.


// src/controllers/users/index.ts

const retrieveToken = async (
  req: Request,
  res: Response
): Promise<e.Response<any, Record<string, any>> | void> => {
  try {
    const { refresh } = req.body;
    if (!refresh) {
      return res.status(400).send("A refresh token is required");
    }

    if (!refreshTokens.includes(refresh)) {
      return res.status(403).send("Refresh Invalid. Please login.");
    }

    jwt.verify(
      refresh,
      process.env.JWT_SECRET_KEY,
      (err: Error, user: IUser) => {
        if (err) {
          return res.sendStatus(403);
        }

        const token = jwt.sign(
          { user_id: user._id, username: user.username },
          process.env.JWT_SECRET_KEY,
          {
            expiresIn: "5min",
          }
        );

        return res.status(201).send({
          token: token,
        });
      }
    );
  } catch (error) {
    throw error;
  }
};

So what are we doing here?

• Making sure that the refresh token exists in the body

• Making sure that the refresh token exists in the memory of the server

• And finally verifying the refresh token then sending a new access token.
Add this new controller to the userRoutes.
// src/routes/index.ts
...
userRoutes.post("/user/refresh", retrieveToken);
...

You can hit http://localhost:4000/user/refresh to retrieve a new access token.


Logout

But there is a problem. If the refresh token is stolen from the user, someone can use it to generate
as many new tokens as they’d like. Let’s invalidate this.
// src/controllers/users/index.ts
...
const logoutUser = async (
req: Request,
res: Response
): Promise<e.Response<any, Record<string, any>>> => {
try {
const { refresh } = req.body;
refreshTokens = refreshTokens.filter((token) => refresh !== token);

return res.status(200).send("Logout successful");


} catch (error) {
throw error;
}
};

export { registerUser, loginUser, retrieveToken, logoutUser };

And a new route to log out.


// src/routes/index.ts

import {
loginUser,
logoutUser,
registerUser,
retrieveToken,
} from "../controllers/users";
...
userRoutes.post("/user/logout", logoutUser);
...

You can hit http://localhost:4000/user/logout to invalidate the token.

And voilà, we're done.

Conclusion

In this article, we’ve learned how to build an authentication system for our Node & Express
application using JWT.

And as every article can be made better, your suggestions or questions are welcome in the comment section.

Check the code of this tutorial here.

5 unusual JavaScript tips that make your life easier.

Write better code using these five simple yet unusual JavaScript tips.
Photo by David Nicolai on Unsplash

Too many articles about JavaScript tips only cover the basics of Array functions or obvious
improvements to your code. This article will go more in-depth, helping you improve the code
you’re writing daily.
1. Wait for … anything

Sometimes, you want to wait for something to happen. And while this task can become complex
(e.g., using a non-blocking loop), there’s a simple solution for most of your waiting problems:
Promises.

Promises can resolve after a given timeout:


new Promise((resolve) => {
setTimeout(() => {
// DO Something
resolve();
}, 1000);
});

This promise will resolve after about 1 second. You can also store it in a variable and use await to
block for a second (beware of potential UX issues). And while you can find a use case for the
snippet above, it implies a much more helpful trick.
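For example, you can wrap that pattern in a tiny sleep helper (the name sleep is just a convention, not a built-in):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// inside any async function
await sleep(1000); // pause for roughly one second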

You can use promises as semaphores: Sometimes, you want to execute an asynchronous, long-
running process. But a user could trigger this process again and again. So you want to ensure that
the running process has to finish before your users can start it again. Here’s how:
let processStatus = null;

function myProcess() {
  if (processStatus) {
    return;
  }

  processStatus = new Promise(resolve => {
    // Do some heavy lifting, for example
    setTimeout(() => {
      // pretending a long running action
      resolve();
    }, 5000);
  })
  .then(() => {
    processStatus = null;
  });
}

A user can only click this when there’s no active process. It’s helpful to avoid multiple fetches for
the same data.

2. Optimize loops in JavaScript using async

I see many people using the forEach functions on JavaScript Arrays, but the majority of them
aren’t aware of its true power: Async loops.
const asyncArr = [
  new Promise(resolve => setTimeout(resolve.bind(this, 1), 2000)),
  new Promise(resolve => setTimeout(resolve.bind(this, 2), 500)),
  new Promise(resolve => setTimeout(resolve.bind(this, 3), 5000)),
  new Promise(resolve => setTimeout(resolve.bind(this, 4), 1000)),
];

asyncArr.forEach(async (el) => {
  const i = await el;
  console.log(i);
});
// logs: 2, 4, 1, 3
While this is also possible using a for-of loop, it reads much more elegantly using await. Here's the same example using a for-of loop.
for (const el of asyncArr) {
el.then(console.log);
}

Don't let the fact that the for-of loop is more concise fool you. In this example, we don't do any computation with i. However, imagine doing more computation in the function body of then used by the for-of loop.

There are two issues you might hit with the forEach interface here:

First, it doesn't have a return value. That means you either alter your original array, or you do not alter the array at all. If you decide to modify the array, you are producing side effects that are probably hard to debug; if you don't change it at all, fine.

Second, it’s unstable. There’s no way I can log results in the order of the original array without
looping synchronously through all elements. The latter option means that the loop would run
much slower. To avoid this, we can use Promise.all in combination with a map. This will result in a
new array with the values received by our asynchronous calls in the same order as the original
array.
const asyncArr = [
  new Promise(resolve => setTimeout(resolve.bind(this, 1), 2000)),
  new Promise(resolve => setTimeout(resolve.bind(this, 2), 500)),
  new Promise(resolve => setTimeout(resolve.bind(this, 3), 5000)),
  new Promise(resolve => setTimeout(resolve.bind(this, 4), 1000)),
];

Promise.all(asyncArr)
  .then(console.log);
// logs: [1, 2, 3, 4] after 5 seconds

Sweet! It won’t cause side effects since it returns a new array (except if the functions
in asyncArr cause side effects — then you’ve lost) and it will return stable results!

3. Avoid using else when possible

This tip is simple yet powerful. There are many situations where you’ll see yourself writing an else
block when it could be avoided with 2 seconds of thinking. So let me introduce you to some
situations where it is unnecessary.

3.1 Early returns


Making use of early return statements can help you to eliminate the first set of unnecessary else
blocks. See the example below:
function myFun() {
  if (x > 10) {
    // do something
  } else {
    // do something else
  }
  return someVar;
}

This can be refactored to


function myFun() {
  if (x > 10) {
    // do something
    return someVar;
  }
  // do something else
  return someVar;
}

You might object: The examples do not differ much in size — why bother? It’s not about size — it’s
about readability and reducing complexity.

Any if and else statement will increase the complexity of your functions. When encountering an
else block, how long is it? How long was the if-block? Do you still remember where you are when
you read?
There’s a rule of thumb here: Handle errors with if-statements and return as soon as possible.
Then, the function should do what it is supposed to do outside of any if/else.
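A small illustrative sketch of that rule of thumb (the function and its checks are invented for the example):

function parsePrice(input) {
  // handle the error cases first and return early
  if (typeof input !== "string") return NaN;
  if (input.trim() === "") return NaN;

  // the "real" work happens outside of any if/else
  return Number.parseFloat(input);
}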

3.2 Violated the Single Responsibility Principle (SRP)

Junior Developers who come across best practices always ask me: When do I know I violated the
Single Responsibility Principle? When do I realize that my function does more than one thing? If-
else can be an indicator!
function myFun() {
  if (x > 10) {
    // do something
  } else {
    // do something else
  }

  if (y < 100) {
    // do something
  } else {
    // do something else
  }

  return someVar;
}

Especially when having multiple if-“else if”-else blocks in one function, the chances are that you’re
violating the SRP for this function. The example above could be refactored to
function myFunc() {
const xValid = checkX(x);
const yValid = checkY(y);
return xValid && yValid;
}

Of course, the refactoring highly depends on the semantics of your code. However, this would be
one possible way to rewrite the example above to a much cleaner and more readable function.

3.3 Default values

Does this look familiar to you?


let x;
if (someVar === "something") {
x = 1;
} else {
x = somethingElse;
}

Consider this refactoring:


let x = 1;
if (someVar !== "something") {
x = somethingElse;
}

Or even this:
const x = someVar !== "something"
? somethingElse
: 1;

Okay, the last example is using else (kind of). However, we avoided using let which might be a
source of errors.

4. Using Array.from for iterables

Okay, this one is probably the one most of you are familiar with (besides the Promise.all thing,
maybe), but I met so many developers who were unaware of this that I decided to put it on the list.
const as = document.querySelectorAll("a");

What is the specific type of as?

First, you might think it’s an array, but it’s not. Proof?
Array.isArray(as);
// -> false

Therefore, as.map(...) won’t work. Bummer.


Second, Chrome displays it as if it were an array, which may confuse many of you. However, note
the “NodeList(3)”

It means it is a so-called “array-like object” (or an iterable). So whenever you encounter this kind
of object, you can create an array from it.
const asArray = Array.from(as);

And this is indeed an array. Now, you’re able to use map, filter, or any other array function on it.
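For instance, reusing the anchors collected above (the filter condition is just an example):

const hrefs = Array.from(as).map((a) => a.href);
const externalLinks = hrefs.filter((href) => !href.startsWith(location.origin));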

5. Getting rid of references

References can cause all sorts of side effects in your code. Being aware of when you're handling a reference and when you're merely working with a value is key to writing bug-free software. However, I won't go into much detail about what a reference is. You'll mainly experience this behavior when working with objects and arrays.

Here’s the issue:


const a = { key: "value" };
const b = a;
b.key = "something else";

console.log(a.key);

What will be logged? Correct, "something else."

b is only a reference to a; therefore, whenever b changes a key on the referenced object, the change is also reflected on a. We can create a new object b without any references to a using several techniques.

5.1 Destructuring

This one has been the most popular for quite some time now. Destructuring with the spread syntax creates a new object and removes the reference (note that this is a shallow copy: nested objects are still shared).
const a = { key: "value" };
const b = { ...a };
b.key = "something else";

console.log(a.key); // logs: value
console.log(b.key); // logs: something else

5.2 Object.assign
The spread syntax is essentially syntactic sugar for Object.assign, therefore references can also be removed using this technique:
const b = Object.assign({}, a);

The outcome would be the same as for destructuring.

5.3 Array.from

If you’re dealing with Arrays, then you can use Array.from to get rid of references. Here’s the issue:
const arr1 = [1, 2, 3, 4];
const arr2 = arr1;
arr2[0] = 5;

console.log(arr1); // -> [5, 2, 3, 4]

and it can be solved using Array.from like so


const arr1 = [1, 2, 3, 4];
const arr2 = Array.from(arr1);
arr2[0] = 5;

console.log(arr1); // -> [1, 2, 3, 4]
console.log(arr2); // -> [5, 2, 3, 4]

Destructuring also works for arrays, of course. Another thing to notice is: Array.from not only
works on “array-like objects,” but on arrays, too.
5.4 Last resort: JSON.stringify

As a last resort, you can stringify an object and parse it again. All references will be cleared.
const a = { key: "value" };
const b = JSON.parse(JSON.stringify(a));
b.key = "something else";

console.log(a.key); // logs: value
console.log(b.key); // logs: something else

However, be aware that JSON.stringify also clears any type information. This may cause you some
trouble with Dates and other objects.

That’s all, folks!

If you liked this article, make sure to clap and follow me to show me that you’d like more of that
stuff. Thank you so much for reading and your support!

Exciting new CSS features in 2022

All major browsers will get many new features in the coming months.
Photo by Callum Hill on Unsplash

All major browsers have agreed on a specific set of features to implement in 2022. The progress of the so-called "Interop 2022" can be tracked here: https://wpt.fyi/interop-2022?stable. I will tell you my most anticipated features that will land during Interop 2022.
A new HTML Tag: The dialog Element

I have already written an article on it; you can find it here:

You can finally make use of the HTML dialog element


New safari versions support it now!
towardsdev.com

TLDR; we have a new feature that will help all of us. In most projects, we have to implement a modal. Usually, we use a div and add some open/close logic to it. This has become such a common pattern that we've got a new element for it: dialog. See the browser support here: https://caniuse.com/?search=dialog. Adoption is really good now; keep in mind that Safari only supports it since version 15.4.

New Viewport Units

If you've ever written a cross-platform mobile web app, then you know the struggle. What's the height of the users' actual viewport? Disappearing address bars, software keyboards, and other weird behaviors (safe zones…) left us in despair. But fear not, my fellow developer. There's hope, at last! Behold the new viewport units: dvh, lvh, and svh.
https://twitter.com/jensimmons/status/1499441043930062854
The image is self-explanatory: the units describe the dynamic, largest, and smallest viewport heights. I assume dvh will be a life-saver. Unfortunately, it hasn't landed in Chrome yet: https://caniuse.com/?search=dvh
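A rough sketch of how I expect to use it once support lands everywhere (the class name is arbitrary, and 100vh stays in as a fallback):

.app-shell {
  height: 100vh;  /* fallback for browsers without the new units */
  height: 100dvh; /* dynamic viewport height: follows the address bar as it shows and hides */
}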

CSS Subgrids

I love how the grid display type gets more and more attention. It has always been in the shadows
of flexboxes. However, it can be much more powerful when used right.

When doing reviews, I get asked this a lot: “When to use flex vs. grid?”
I have already answered this question in another article about the RAM Layout Pattern. TLDR;
Flexboxes are for one-dimensional layouts; grids are for 2-dimensional layouts. Sometimes it can
be that simple!

RAM — CSS Layout Pattern


Learn about the RAM Layout Pattern.
towardsdev.com

Two things are still notoriously hard to solve using CSS grids.
The first one is masonry grids. Unfortunately, it’s not really possible to do them with CSS-only
solutions, and it’s probably not going to change soon.

The second one is dealing with subgrids, so a grid within a grid. Consider the following example
(original example by web.dev)

If it looks like that on your browser, then you’re probably using Chrome or Safari:

The web.dev Codepen without CSS subgrid support

It should look like that when using a Browser with CSS subgrid support (like Firefox)
The web.dev Codepen with CSS subgrid support

As you can see, this can be a life saver! Having individual columns that share the same height for their layout is still hard to do in CSS. Here's the current browser support: https://caniuse.com/?search=subgrid
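A minimal sketch of the pattern (class names are made up): each card spans three of the parent's rows and adopts them via subgrid, so titles, bodies, and footers line up across columns.

.cards {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}

.card {
  display: grid;
  grid-row: span 3;            /* one parent row each for title, body, footer */
  grid-template-rows: subgrid; /* reuse the parent grid's row tracks */
}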

CSS Level 5 Color Functions

The feature you’ll learn about now might not seem as helpful as the other features I have shown
you. But I can certainly find some use cases for this one, too!

There are at least two new color functions added to CSS this year; the one I anticipate the most
is color-mix. Here’s the syntax:
div {
background-color: color-mix(in hsl, red 50%, yellow 50%);
}

color-mix expects the first parameter to be a color space (such as RGB, hsl, lch, or oklch). The
second and third parameters are colors with their ratio in percent. The result will be a blended
color with the given shares in your desired color space.

If you want to learn more about this specific feature, here’s the official
spec https://drafts.csswg.org/css-color-5/

You can try it out using Firefox nightly or Safari beta: https://caniuse.com/?search=color-mix

Cascade layers

Last but not least: cascade layers, of course. I mention them now because they have landed cross-browser,
and you’ve probably already seen them in action (e.g., when using Tailwind).

The basic syntax is this:


@layer name {
/* rules */
}
So this would be a valid cascade layer:
@layer utilities {
.padding-sm {
padding: .5rem;
}

.padding-lg {
padding: .8rem;
}
}

(Example from MDN)

Layers are a complex topic. If you want to learn more about them, read the MDN doc (I will link it
below). However, I’ll give you the TLDR: you may define multiple layers, and the order in which they
are defined is important. The rules in the last defined layer are the most important, so they take
precedence. This can help you solve specificity issues.
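
A tiny illustration of that ordering (selectors made up): both rules target button with equal specificity, but the rule in the later-defined layer wins.

@layer base, overrides;

@layer base {
  button { color: red; }
}

@layer overrides {
  /* wins, because "overrides" is defined after "base" */
  button { color: blue; }
}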

Read on: https://developer.mozilla.org/en-US/docs/Web/CSS/@layer

That’s all, folks!

Thank you for reading; follow me to stay up to date on web platform features!
@Configuration
@EnableWebSocketMessageBroker
class WebSocketConfig : WebSocketMessageBrokerConfigurer {
    override fun registerStompEndpoints(registry: StompEndpointRegistry) {
        registry.addEndpoint("/stomp").setAllowedOrigins("*")
    }

    override fun configureMessageBroker(registry: MessageBrokerRegistry) {
        registry.enableSimpleBroker("/topic")
        registry.setApplicationDestinationPrefixes("/app")
    }
}

Step 3: Create API Endpoint for unidirectional real-time communication

The API endpoint provides a way for microservices (backend) to send messages to the web
application (frontend). As these messages only require a one-way flow (backend → WebSocket
server → frontend), an API is a good communication medium between the microservices
(backend) and the WebSocket server.

@RestController
@RequestMapping("/api/notification")
class NotificationController(private val template: SimpMessagingTemplate) {
@PostMapping
fun newMessage(@RequestBody request: NewMessageRequest) {
template.convertAndSend(request.topic, request.message)
}
}

The code above creates a REST controller with a POST request endpoint that takes in a request
body “NewMessageRequest” where the topic is the STOMP destination that the client (frontend)
subscribes to and message is the actual message in String format. With this, you can now send a
message via API to the WebSocket server, which will then be forwarded to the web application
(frontend) via WebSocket.
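
The request body is a simple DTO. A minimal sketch of what NewMessageRequest could look like, based on the description above (the exact definition is an assumption):

data class NewMessageRequest(
    val topic: String,   // STOMP destination the client (frontend) subscribes to, e.g. "/topic/toast"
    val message: String  // message payload forwarded over WebSocket
)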

Step 4: Configure Redis Pub/Sub for bidirectional real-time communication (Optional)

Note: Depending on your use case, you can omit this step if you do not require bidirectional real-
time communication between the web application (frontend) and microservices (backend).

Communication via APIs between microservices (backend and WebSocket server) is not optimal
for real-time communication compared to a publish-subscribe messaging pattern. Hence, for
bidirectional communication, we will make use of a publish-subscribe messaging pattern.

There are many ways to implement a publish-subscribe messaging pattern but for demonstration
and simplicity’s sake, we will use Redis Pub/Sub.

To get started, run a Redis server locally using Docker (docker run --name redis-server -p
6379:6379 -d redis) and add the following configuration to the application.yml file so that the
WebSocket server can connect to the Redis server.
# application.yml
spring.redis:
host: localhost
port: 6379

Next, create a configuration file, RedisConfig.kt, and add the configuration below. Essentially, we
are configuring a ReactiveRedisTemplate that communicates with the Redis server and is configured
to serialize and deserialize messages as String.

@Configuration
class RedisConfig {
@Bean
fun reactiveRedisTemplate(factory: LettuceConnectionFactory): ReactiveRedisTemplate<String, String> {
val serializer = Jackson2JsonRedisSerializer(String::class.java)
val builder = RedisSerializationContext.newSerializationContext<String, String>(StringRedisSerializer())
val context = builder.value(serializer).build()
return ReactiveRedisTemplate(factory, context)
}
}
Following this, create a RedisService that contains logic for subscribing and publishing to the Redis
server. In the example below, we subscribed to an inbound channel
topic GREETING_CHANNEL_INBOUND which listens for incoming messages from other microservices
(backend) and forwards all messages received to the STOMP destination /topic/greetings.
@Service
class RedisService(
private val reactiveRedisTemplate: ReactiveRedisTemplate<String, String>,
private val websocketTemplate: SimpMessagingTemplate
) {
fun publish(topic: String, message: String) {
reactiveRedisTemplate.convertAndSend(topic, message).subscribe()
}

fun subscribe(channelTopic: String, destination: String) {
reactiveRedisTemplate.listenTo(ChannelTopic.of(channelTopic))
.map(ReactiveSubscription.Message<String, String>::getMessage)
.subscribe { message ->
websocketTemplate.convertAndSend(destination, message)
}
}

@PostConstruct
fun subscribe() {
subscribe("GREETING_CHANNEL_INBOUND", "/topic/greetings")
}
}

Lastly, create a Controller that processes messages from the web application (frontend) which are
sent to the WebSocket server with the prefix /app. In the example below, messages sent
to /app/greet will be forwarded (published) to an outbound channel
topic GREETING_CHANNEL_OUTBOUND which will then be processed by any microservice (backend) that is
listening to that channel.
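
A minimal sketch of what such a controller could look like, reusing the RedisService defined above (class and method names are assumptions):

@Controller
class GreetingController(private val redisService: RedisService) {
    // messages sent by the frontend to /app/greet are published to the outbound Redis channel
    @MessageMapping("/greet")
    fun greet(message: String) {
        redisService.publish("GREETING_CHANNEL_OUTBOUND", message)
    }
}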


With that, we have set up the WebSocket server to act as a middleware (or proxy) that
communicates with the web application (frontend) via WebSocket and communicates with the
microservices (backend) via Redis Pub/Sub.
Testing WebSocket Connection

Using an open-source websocket client debugger tool built by jiangxy as a mock web application
(frontend), we can test the WebSocket server we built above.

Test #1: Send message from backend to frontend (via API)

Spin up the WebSocket server, and connect to the WebSocket server ws://localhost:8080/stomp over
the STOMP protocol using the WebSocket debugger tool. Once connected, configure the WebSocket
debugger tool to subscribe to the topic /topic/toast.

Next, send an HTTP POST request to the WebSocket server using the command below:
curl -X POST -d '{"topic": "/topic/toast", "message": "testing API endpoint" }' -H 'Content-Type: application/json' localhost:8080/api/notification

The WebSocket debugger tool should have the output shown below:
Screenshot of WebSocket debugger tool’s output for sending a message from backend via API

This shows that the WebSocket server has successfully received the message via API and
forwarded the message to the web application (frontend) via WebSocket.

Test #2: Send message from backend to frontend (via Pub/Sub)

Spin up the WebSocket server, and connect to the WebSocket server ws://localhost:8080/stomp over
the STOMP protocol using the WebSocket debugger tool. Once connected, configure the WebSocket
debugger tool to subscribe to the topic /topic/greetings (defined above).
Using the Redis CLI, publish a message to the channel topic GREETING_CHANNEL_INBOUND (defined above)
using the command PUBLISH GREETING_CHANNEL_INBOUND "\"Test Message from Backend PubSub\"".

Note that the extra \” is required as the WebSocket server is configured to receive String
messages. The WebSocket debugger tool should receive the message as shown below

Screenshot of WebSocket debugger tool’s output for sending a message from backend via Redis PubSub

This shows that the WebSocket server has successfully received the message via Redis Pub/Sub
and forwarded the message to the web application (frontend) via WebSocket.

Test #3: Send message from frontend to backend (via Pub/Sub)


Spin up the WebSocket server, and connect to the WebSocket
server ws://localhost:8080/stomp over STOMP protocol using the WebSocket debugger tool. Once
connected, using Redis CLI, subscribe to channel topic GREETING_CHANNEL_OUTBOUND (defined above)
using the command SUBSCRIBE GREETING_CHANNEL_OUTBOUND. Send a message to STOMP
destination /app/greet using the WebSocket debugger tool, and you should observe the following:

Output of Redis CLI Subscribe Command

This shows that the WebSocket server has successfully received the message via WebSocket and
forwarded the message to the microservices (backend) via Redis Pub/Sub.

Summary
In summary, we have run through a possible design of a WebSocket server in a microservice
architecture. Having a WebSocket server greatly aligns with the “Single Responsibility Principle”
of microservices, where it manages all WebSocket connections to the web application (frontend) as
well as handles real-time communications between the web application (frontend) and other
microservices (backend).

That’s it! I hope you learned something new from this article. Stay tuned for the next one, where
we will look into scaling the WebSocket server.

If you like this article, please follow me for more :).

Thank you for reading until the end. Happy learning!

What is the Difference Between map() and forEach() in JavaScript?

One of the most common data structures in JavaScript is the array, and we often need to process its
elements. To iterate through our arrays, JavaScript provides two of its most-loved functions: map
and forEach. Both were introduced in ES5.

They look almost identical, but there are some differences between them. Before jumping in, let us
see what map() and forEach() actually do.

forEach()

This method allows you to execute a callback function by iterating through each element of an
array. Always remember that it doesn’t return anything; if you try to capture its return value, it will
be undefined.

forEach: Not chainable
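
A quick sketch of that behavior:

const numbers = [1, 2, 3];

// forEach only runs the callback for each element; its return value is always undefined
const result = numbers.forEach((n) => n * 2);

console.log(result); // undefined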

map()
It is almost identical to the forEach method and executes a callback function to loop over an array
easily. But the difference is that it always returns a new array, which means it doesn’t change our
source array. It is, therefore, an immutable operation.

A great thing about the map method is that it’s also chainable, meaning you can call a number of
map operations in a row.

map: chainable
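
A quick sketch of map returning a new array and being chainable:

const numbers = [1, 2, 3];

// map returns a new array, so further array methods can be chained onto it
const doubled = numbers.map((n) => n * 2).filter((n) => n > 2);

console.log(doubled); // [4, 6]
console.log(numbers); // [1, 2, 3] (the source array is untouched)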

Differences and Summary

Both methods help us to iterate through our array, and the choice between map and forEach will
depend on your use case.
map vs forEach

If you’re planning to transform the array, you should use the map function, since it doesn’t change the
original array and returns a new one. But if you won’t need the returned array and just want to
loop through all elements of an array, use forEach or even a for loop.
Apart from this, these functions are almost identical.

That is all from my end. If there are more differences or any mistakes in the article, please
share your views in the comment section, and thanks for reading. Check out my other
articles at https://medium.com/@aayushtibra1997

Using Node.js to Make Your Own Server


Super basic guide to using Node.js + Express

The information covered in this post is very simplified because I am still in the process of getting to
know these concepts, modules, and frameworks in detail. It’s my second project in which I’m making
my own server, and I just thought I’d write down the process for my own convenience next time.

What is Node.js?

There is much to be said and explained about Node.js, but I won’t dive too deep into it. Node.js is
an “asynchronous event-driven JavaScript runtime” which basically allows you to use
JavaScript in a non-web-browser setting. It’s neither a framework nor a programming language. It is
used most commonly for server-side programming.

Starting Node.js

To use Node.js, you must install it first. Then, in your project folder terminal, type npm init, which
should look like this:
npm init

If you don’t want to configure the above prompts, just run npm init -y. Then, a file
called package.json should be created in your project folder. It contains basic information about
the project and its dependencies. It is in JSON format (JavaScript Object Notation), which
expresses an object in a {key: value} format.
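
As a rough illustration, a freshly generated package.json might look something like this (the values will differ for your project):

{
  "name": "my-server",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {}
}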

Note: I’ve created a separate index.js file in my project folder, which will be the “entry point”
of my server-side code.

npm packages

Just like Flutter has pub, Node.js has npm (Node Package Manager), the open source package
registry and manager for Node.js. npm packages are all declared in the package.json file. To
install/uninstall a package:
npm i packageName --save // install
npm uninstall packageName --save // uninstall

I recommend using the --save flag because it automatically registers the package in package.json, which
is convenient when someone else (or you) later uses the source code. There are a lot of popular
and commonly used packages; for my project, I will start off with Express.
Express

I feel like Express is the most popular Node.js framework out there as it allows you to handle
HTTP requests easily and flexibly. Install it by typing the following in the project terminal:
npm i express --save

To get started, you need the following code:


const express = require("express"),
  app = express();

app.listen(3000, _ => console.log("connected to server"));

require() in Node.js basically imports a module from a separate file and returns the exported
object. I think of it as using the keyword as in Dart when you import a package.
import 'package:http/http.dart' as http;

The app returned by express() is a JavaScript function, which is passed to Node’s HTTP
servers as a callback to handle requests. Also, to avoid repeating const when declaring
variables, you can continue the declaration after a comma.

If you don’t call app.listen(), your server will not start. It binds and listens for connections
on the specified host and port; if the port is not specified, the OS will use an arbitrary
unused port. It has a couple of optional parameters, of which I use a callback.
To start your server, type node entryPoint in the terminal, and you can exit with Ctrl+C.

app. methods

app. has a lot of methods. I’ll be going over app.use(), app.get(), and app.post().

app.use()

app.use() mounts the specified middleware (it can be at the application level, or at the route level if
you specify a path; this also applies to app.get() and app.post()). Unlike app.get() or app.post(),
middleware functions have access to (1) the request, (2) the response, and (3) the next middleware
function. I’ll write about middleware functions another day.
app.use(express.urlencoded({ extended: false }));
app.use(express.json());

The app.use(express.urlencoded({ extended: false })) line returns a middleware that only
parses urlencoded bodies and only looks at requests where the Content-Type
matches "application/x-www-form-urlencoded; charset=utf-8". If the
body is in JSON format, you would use app.use(express.json()). I need these
middleware at the application level, thus no path is specified.

The extended: false option means that the data will be parsed with the querystring library,
while extended: true means that it will be parsed with the qs library. The qs library allows you to
create a nested object from your query string, whereas the querystring library does not. Also, qs will
NOT filter out "?", whereas the querystring library will. Unless you have full, complex objects as
queries, extended: false is probably better.

app.get()

app.get() takes a path and a callback as its required parameters. It “routes” HTTP GET
requests to the specified path, with the specified callback function(s). Let’s say I want to get my
usersData.
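
A sketch of what that route could look like (usersData is just a hard-coded placeholder here):

const usersData = [{ id: 1, name: "Jane" }, { id: 2, name: "John" }];

app.get("/users", (req, res) => {
  // respond with the data as JSON
  res.json({ usersData });
});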
Note: if you put just { data }, it becomes {data: data} — super convenient.

Response

Express’ response has several methods, some of which are: res.download() — for prompting a
file download, res.end() — ending the response without any data, res.json() — sends a JSON
response, res.redirect() — redirecting to a specified path, res.send() — sends various
responses. For res.send(), if the parameter is a String, the content-type will be “text/html”; if the
parameter is an Array or an Object, the content-type will be JSON.

app.post()

app.post() is basically the same as app.get() except it is used for POST requests. Since you can send
a body with post requests, you’ll be able to view the body by using req.body.
app.post()
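
A minimal sketch (the route path is made up):

app.post("/users", (req, res) => {
  // req.body is populated by the express.json() / express.urlencoded() middleware above
  console.log(req.body);
  res.send("received");
});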

express.Router()

express.Router() allows you to make a router as a module, and it’s better for code separation. Make
a separate file (or directory):
auth.js
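
A minimal sketch of what such an auth.js module could contain (the routes are made up):

const express = require("express");
const router = express.Router();

// mounted under /auth in index.js, so this handles GET /auth/login
router.get("/login", (req, res) => {
  res.send("login page");
});

module.exports = router;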

All you need to do in your index.js file is load the module and app.use() it.
const express = require("express"),
  app = express(),
  auth = require("./lib/server/auth.js");

app.use("/auth", auth);
