Node Collate
In this article, I am sharing a list of Node modules that will help you in your Node backend projects.
You can install these modules using npm.
Node.js’ package ecosystem, npm, is the largest ecosystem of open source libraries in the world.
To install any node module run the below command in your terminal:
npm install module_name
Here is a list of node modules that will extend the capability of your node.js application.
2. Forever: A simple CLI tool for ensuring that a given node script runs continuously (i.e. forever).
forever start server.js
3. Nodemon: A simple monitor script for use during the development of a node.js app. It watches the files in the directory in which nodemon was started, and if any files change, nodemon automatically restarts your node application.
nodemon server.js
4. Helmet: Helmet middleware is a toolkit that helps you to secure your Express apps by setting
various HTTP headers.
const express = require('express');
const helmet = require("helmet");
const app = express();

app.use(helmet());

// Or you can use individual headers
app.use(helmet.contentSecurityPolicy());
app.use(helmet.crossOriginEmbedderPolicy());
app.use(helmet.crossOriginOpenerPolicy());
app.use(helmet.crossOriginResourcePolicy());
app.use(helmet.dnsPrefetchControl());
app.use(helmet.expectCt());
app.use(helmet.frameguard()); // Prevent clickjacking attacks
app.use(helmet.hidePoweredBy()); // Hide the tech stack from headers
app.use(helmet.hsts()); // Set Strict-Transport-Security
app.use(helmet.ieNoOpen());
app.use(helmet.noSniff());
app.use(helmet.originAgentCluster());
app.use(helmet.permittedCrossDomainPolicies());
app.use(helmet.referrerPolicy());
app.use(helmet.xssFilter()); // Mitigate cross-site scripting attacks
5. Cors: CORS is shorthand for Cross-Origin Resource Sharing. It is a mechanism to allow or restrict requested resources on a web server depending on where the HTTP request was initiated.
This policy is used to secure a certain web server from access by other websites or domains. For
example, only the allowed domains will be able to access hosted files in a server such as a
stylesheet, image, or script.
const express = require('express')
const cors = require('cors')
const app = express()

// Simple usage (enable all CORS requests)
app.use(cors())

// Enable CORS for a single route
app.get('/products/:id', cors(), function (req, res, next) {
res.json({msg: 'This is CORS-enabled for a Single Route'})
})
6. Moment: A lightweight JavaScript date library for parsing, validating, manipulating, and
formatting dates.
var now = "04/09/2013 15:00:00";
var then = "02/09/2013 14:20:30";

var ms = moment(now, "DD/MM/YYYY HH:mm:ss").diff(moment(then, "DD/MM/YYYY HH:mm:ss"));
var d = moment.duration(ms);
var s = d.format("hh:mm:ss"); // note: duration.format() requires the moment-duration-format plugin
7. Morgan: HTTP request logger middleware for node.js. Passing the preset tiny as an argument to morgan() uses its built-in format, which logs the HTTP method, the URL, the status code, and the request’s response time in milliseconds.
const express = require('express');
const morgan = require('morgan');
const app = express();

app.use(morgan('tiny'));
9. Async: Async is a utility module that provides straightforward, powerful functions for working with asynchronous JavaScript.
Many helper methods exist in Async that can be used in different situations, like series, parallel,
waterfall, etc. Each function has a specific use case, so take some time to learn which one will
help in which situations.
async.series([
function(callback) {
// do some stuff ...
callback(null, 'one');
},
function(callback) {
// do some more stuff ...
callback(null, 'two');
}
],
// optional callback
function(err, results) {
// results is now equal to ['one', 'two']
}
);

// ============================================================

async.parallel({
one: function(callback) {
...
},
two: function(callback) {
...
},
...
something_else: function(callback) {
...
}
},
// optional callback
function(err, results) {
// 'results' is now equal to: {one: 1, two: 2, ..., something_else: some_value}
}
);

// ============================================================

async.waterfall([
function(callback) {
callback(null, 'one', 'two');
},
function(arg1, arg2, callback) {
// arg1 now equals 'one' and arg2 now equals 'two'
callback(null, 'three');
},
function(arg1, callback) {
// arg1 now equals 'three'
callback(null, 'done');
}
], function(err, result) {
// result now equals 'done'
});
10. Mongoose: It is a MongoDB ODM (object data modeling) tool designed to work in an asynchronous environment. This package enables you to easily connect to a MongoDB database using Node.js.
const mongoose = require('mongoose');

const connectDB = async () => {
  mongoose
    .connect('mongodb://localhost:27017/playground', {
      // note: these connection options were removed in Mongoose 6+; keep them for Mongoose 5.x
      useCreateIndex: true,
      useNewUrlParser: true,
      useUnifiedTopology: true,
      useFindAndModify: false
    })
    .then(() => console.log('Connected Successfully'))
    .catch((err) => console.error('Not Connected'));
};

module.exports = connectDB;
11. Mysql: The mysql package enables you to easily connect to a MySQL database using Node.js.
var mysql = require('mysql');

var connection = mysql.createConnection({
  host     : 'localhost',
  database : 'dbname',
  user     : 'username',
  password : 'password',
});

connection.connect(function(err) {
  if (err) {
    console.error('Error connecting: ' + err.stack);
    return;
  }
  // run an example query once connected
  connection.query('SELECT * FROM users', function(err, results) {
    if (err) throw err;
    results.forEach(result => {
      console.log(result);
    });
    connection.end();
  });
});
12. Nodemailer: This module enables e-mail sending from Node.js applications.
"use strict";
const nodemailer = require("nodemailer");

async function main() {
  let testAccount = await nodemailer.createTestAccount();

  let transporter = nodemailer.createTransport({
    host: "smtp.ethereal.email",
    port: 587,
    secure: false, // true for 465, false for other ports
    auth: {
      user: testAccount.user, // generated ethereal user
      pass: testAccount.pass, // generated ethereal password
    },
  });

  let info = await transporter.sendMail({
    from: '"Fred Foo " <[email protected]>', // sender address
    to: "[email protected], [email protected]", // list of receivers
    subject: "Hello ✔", // Subject line
    text: "Hello world?", // plain text body
    html: "<b>Hello world?</b>", // html body
  });

  console.log("Message sent: %s", info.messageId);
  // Message sent: <[email protected]>
  console.log("Preview URL: %s", nodemailer.getTestMessageUrl(info));
}

main().catch(console.error);
13. Bcrypt: The bcrypt NPM package is a JavaScript implementation of the bcrypt
password hashing function that allows you to easily create a hash out of a password
string.
15. Response-time: This module creates a middleware that records the response time for
requests in HTTP servers. The “response time” is defined here as the elapsed time from when a
request enters this middleware to when the headers are written out to the client.
const express = require('express');
const responseTime = require('response-time');

const app = express()

app.use(responseTime())

app.get('/', function(req, res) {
  res.send('hello, world!')
})
16. connect-busboy: busboy is a streaming parser for HTML form data for node.js.
const express = require('express');
const busboy = require('connect-busboy');
const path = require('path');
const fs = require('fs-extra');

const app = express();

app.use(busboy({
  highWaterMark: 2 * 1024 * 1024, // Set 2MiB buffer
}));

const uploadPath = path.join(__dirname, 'uploads'); // the folder name here is an example
fs.ensureDir(uploadPath);

app.route('/upload').post((req, res, next) => {
  req.pipe(req.busboy); // Pipe the request through busboy
  req.busboy.on('file', (fieldname, file, filename) => {
    console.log(`Upload of '${filename}' started`);
    const fstream = fs.createWriteStream(path.join(uploadPath, filename));
    file.pipe(fstream); // Pipe the file stream to disk
    fstream.on('close', () => {
      console.log(`Upload of '${filename}' finished`);
      res.redirect('back');
    });
  });
});

const server = app.listen(3200, function() {
  console.log(`Listening on port ${server.address().port}`);
});
17. Google-auth-library: This is Google’s officially supported node.js client library for using OAuth 2.0 authorization and authentication with Google APIs.
const { OAuth2Client } = require("google-auth-library");

async function googleSignInUser(request, response) {
const client = new OAuth2Client(process.env.GOOGLE_CLIENT_ID);
  const { idToken } = request.body;

  client
.verifyIdToken({ idToken, audience: process.env.GOOGLE_CLIENT_ID })
.then((res) => {
const { email_verified, name, email } = res.payload;
if (email_verified) {
User.findOne({ email }).exec((err, user) => {
if (user) {
          const { _id, email, fullName } = user;
          const token = jwt.sign({ email: email }, process.env.SECRET_KEY, {
            expiresIn: process.env.EXPIRE_IN,
          });
          return response.status(200).json({
accessToken: token,
user: { _id, email, fullName },
});
} else {
          const password = email + process.env.SECRET_KEY;
          bcrypt.hash(password, 12, async (err, passwordHash) => {
if (err) {
response.status(500).send("Couldn't hash the password");
} else if (passwordHash) {
return User.create({
email: email,
fullName: name,
hash: passwordHash,
}).then((data) => {
const { _id, email, fullName } = data;
const token = jwt.sign(
{ email: email },
process.env.SECRET_KEY,
{ expiresIn: process.env.EXPIRE_IN }
          );
          response.status(200).json({
accessToken: token,
user: { _id, email, fullName },
});
});
}
});
}
});
} else {
return res.status(400).json({
error: "Google login failed. Try again",
});
}
});
}
18. Redis: Redis is a super fast and efficient in-memory, key-value cache and store. It’s also
known as a data structure server, as the keys can contain strings, lists, sets, hashes, and other data
structures.
const redis = require("redis");

(async () => {
  const client = redis.createClient();
  client.on('error', (err) => console.log('Redis Client Error', err));

  await client.connect();
  await client.set('key', 'value');
  const value = await client.get('key');
  console.log(value); // logs: value
})();
19. Joi: The most powerful schema description language and data validator for JavaScript.
const Joi = require('joi');

app.post('/blog', async (req, res, next) => {
  const { body } = req;

  const blogSchema = Joi.object().keys({
    title: Joi.string().required(),
    description: Joi.string().required(),
    authorId: Joi.number().required()
  });

  const { value, error } = blogSchema.validate(body);
  const valid = error == null;

  if (!valid) {
    res.status(422).json({
      message: 'Invalid request',
      data: body
    })
  } else {
    const createdPost = await api.createPost(value);
    res.json({ message: 'Resource created', data: createdPost })
  }
});
20. Winston: Winston is one of the best logging middlewares. Logging is the process of recording information generated by application activities into log files. Messages saved in the log file are called logs; a log is a single record in the log file.
A log is the first place to look as a programmer, to track down errors and flow of events, especially
from a server. A log tells you what happens when an app is running and interacting with your
users. A great use case for logging would be if, for example, you have a bug in your system, and you
want to understand the steps that led up to its occurrence. Let's take an example of the custom
logger.js
const { createLogger, format, transports, config } = require('winston');
const { combine, timestamp } = format;

const usersLogger = createLogger({
  levels: config.syslog.levels,
  format: combine(
    timestamp({
      format: 'YYYY-MM-DD HH:mm:ss'
    }),
    format.json()
  ),
  transports: [
    new transports.File({ filename: 'users.log' })
  ]
});
const transactionLogger = createLogger({
transports: [
new transports.File({ filename: 'transaction.log' })
]
});
module.exports = {
usersLogger: usersLogger,
transactionLogger: transactionLogger
};
23. loadtest: Runs a load test on the selected HTTP or WebSockets URL. The API allows for easy
integration in your own tests.
$ loadtest [-n requests] [-c concurrency] [-k] URL
$ loadtest -n 100000 -c 10000 http://localhost:9090/
24. i18next: i18next is a very popular internationalization framework for browsers or any other JavaScript environment (e.g. Node.js, Deno).
const i18next = require('i18next');

i18next
  .init({
    lng: 'en',
    resources: {
      en: {
        translation: { welcome: 'hello world' } // example resource
      }
    }
  })
  .then(() => {
    console.log(i18next.t('welcome')); // logs: hello world
  });
25. jsonwebtoken: JWT, or JSON Web Token, is an open standard used to share security
information between two parties — a client and a server.
app.post("/login", async (req, res) => {
  try {
    const { email, password } = req.body;

    // Validate user input
    if (!(email && password)) {
      return res.status(400).send("All input is required");
    }

    // Validate that the user exists in our database
    const user = await User.findOne({ email });

    if (user && (await bcrypt.compare(password, user.password))) {
      // Create token
      const token = jwt.sign(
        { user_id: user._id, email },
        process.env.TOKEN_KEY,
        { expiresIn: "2h" }
      );

      // Save user token
      user.token = token;

      // Return the user
      return res.status(200).json(user);
    }
    res.status(400).send("Invalid Credentials");
  } catch (err) {
    console.log(err);
  }
});
26. Cookie-parser: cookie-parser is a middleware that parses cookies attached to the client request object. To use it, we require it in our index.js file; it can be used the same way as any other middleware.
const Express = require('express');
const CookieParser = require('cookie-parser');

const app = Express();
const port = 80;

app.use(CookieParser());

app.get("/send", (req, res) => {
  res.cookie("loggedin", "true");
  res.send("Cookie sent!");
});

app.get("/read", (req, res) => {
  let response = "Not logged in!";
  if (req.cookies.loggedin == "true") {
    response = "Yup! You are logged in!";
  }
  res.send(response);
});

app.listen(port, () => {
  console.log("Server running!");
});
27. Config: Node-config organizes hierarchical configurations for your app deployments.
npm install config
Create a config directory and add a config/default.json file to it. This will be the default config file
and will contain all your default environment variables.
{
  "server": {
    "host": "localhost",
    "port": 8080
  }
}
28. Supertest: SuperTest is a Node.js library that helps developers test APIs. It extends another library called superagent, a JavaScript HTTP client for Node.js and the browser. Developers can use SuperTest as a standalone library or with JavaScript testing frameworks like Mocha or Jest.
const request = require('supertest');
const app = require('./app');

describe('Testing POST /shots endpoint', function() {
  it('responds with a valid HTTP status code and description and message', async function() {
    const response = await request(app).post('/shots').send({
      title: 'How to write a shot',
      body: "Access the Edpresso tutorial"
    });

    expect(response.status).toBe(200);
    expect(response.body.status).toBe('success');
    expect(response.body.message).toBe('Shot Saved Successfully.');
  });
});
29. Multer: Multer is a node.js middleware for handling multipart/form-data, primarily used for uploading files. It is written on top of busboy for maximum efficiency.
// upload.js
const multer = require("multer");
const path = require("path");

const storage = multer.diskStorage({
destination: (req, file, cb) => {
cb(null, "./public/uploads/images/");
},
filename: (req, file, cb) => {
cb(null, Date.now() + file.originalname);
},
});

const fileFilter = (req, file, cb) => {
if (file.mimetype === "image/jpeg" || file.mimetype === 'image/jpg' || file.mimetype ===
"image/png") {
cb(null, true);
} else {
cb(null, false);
}
};

module.exports = multer({
storage: storage,
limits: {
fileSize: 1024 * 1024 * 5,
},
fileFilter: fileFilter,
});

// Use the exported middleware elsewhere (e.g. const upload = require('./upload'))
app.post('/uploadfile', upload.single('myFile'), (req, res, next) => {
const file = req.file
if (!file) {
const error = new Error('Please upload a file')
error.httpStatusCode = 400
return next(error)
}
  res.send(file)
})
Hope You Like This Article And It Will Help You In Your Upcoming Projects.
Happy Learning!!!!
Converting audio files into video files is an everyday use case in the current age of content
production.
While there are many ways to do it via some custom websites, we programmers don’t follow that
easy, simple path, right?
Today, I will show you how to convert an audio file into a video file in NodeJS.
We will use the power of FFmpeg. In their documentation, they identify themselves as:
This is not something specific to NodeJS. Instead, it’s an OS-level tool that you can install on your machine by running the following commands on Linux:
sudo apt update
sudo apt install ffmpeg
If you want to learn how to use FFMpeg in Docker, you can check the following article.
The problem is accessing the FFmpeg directly from NodeJS can be tricky. However, several
libraries create an abstraction on top of the FFmpeg.
Today we will use the light version of ffcreator, which is called ffcreatorlite.
Install Dependencies
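The actual install command is missing here; assuming the package name as published on npm, it would be:

```shell
npm install ffcreatorlite --save
```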
Then add an audio file to the project. You will probably also want a cover image for your generated
video, right? So bring that in too.
|- src
|----index.ts
|----source.mp3 // your audio file
|----cover.png // your cover image
const CANVAS_WIDTH = 1246; // play with the dimensions. I am creating a 16:9 canvas for youtube videos
const CANVAS_HEIGHT = 700;
const VIDEO_DURATION = 30;
Now let’s combine these functions to create our video file from the audio file.
const generateVideoFromAudioFile = async (): Promise<string> => {
  return new Promise((resolve, reject) => {
creator.start();
creator.closeLog();
creator.on('start', () => {
console.log(`FFCreator start`);
});
The syntax is a little weird because of the structure of the ffcreatorlite library, but what this
essentially does:
1. Creates a creator instance
2. Adds an audio track
3. Adds a cover image
4. Starts the process and waits for its completion
5. After completion, returns the generated video file path
So, now you can run the function like the following:
await generateVideoFromAudioFile();
And after everything is finished, you will see a generated video file randomid.mp4 inside your
project, which you can use any way you like.
Final Thoughts
I have shown a minimal use case that is possible to do with this awesome library. There are a lot of
things that you can do with ffcreatorlite and ffcreator library like adding multiple images with
transition and everything.
1. Create A Droplet
In digital ocean, click on the create button and select the droplet option
- Choose an image (I will go with Ubuntu)
- Choose a plan based on your project needs
- Choose a datacenter region
- In Authentication Part, click on New SSH Key and continue with second step of this article to
create an SSH Key
2. Download putty.exe and puttygen.exe to generate a key, save settings and
connect to the server/droplet easily later on
https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
• Save private key as a .ppk file in the same folder with public key
• Copy public key and paste it in SSH Key Content area in digital ocean
• In droplet creating page, if you need, select additional options and finalize and create the
droplet.
• Click on the SSH->Auth tab and browse for your private key file (.ppk)
• Go back to the session tab and give a name to the session in Saved Sessions field and click
on save button.
In terminal, write the commands below to create a user with the name ‘newuser’ and give it sudo rights. You can use any name you want.
• adduser newuser (create the user)
• usermod -aG sudo newuser (add the user to the sudo group)
• copy and paste your public key in form below without linebreaks
ssh-rsa your-public-key
• in putty click on session name you saved and click on load button
• now you have logged in as a new user without entering any password
• sudo systemctl reload sshd (to reload ssh with this command)
• cd your-project-directory
• npm install
• project is now running, but on an IP address. In order to run it on a domain name, you need to arrange DNS settings to redirect that domain to Digital Ocean’s servers
• go to your domain provider site and open up dns settings and change them like this one
• In digital ocean, go to networking
• in the create new record page, on the A tab, enter @ in the hostname field, select your droplet in the will direct to field, and create the record
• on the same page, go to the CNAME tab, enter www in the hostname field, enter @ in the is an alias of field, and create the record
• now you can check the domain name with the port stated in the project’s server.js/index.js file
• in order to remove the port part and use the domain name only, apply the following steps
• pm2 stop server.js/index.js
Now it is running on the domain name you have added, without a port number specified.
In this tutorial, we’ll learn how to build an authentication system for a Nodejs & Express
application using JWT.
We’ll be working on the project of this tutorial Build an API using Node, Express, MongoDB, and
Docker . You can find the code source for this tutorial here.
Authorization is the process of verifying what data the user can have access to.
And authorization only occurs when you’ve been authenticated. Then, the system will grant you
access to the files you need.
You can easily generate a new value for this secret key online here.
Let’s create the User model. But first, we need to define a type for this model.
// src/types/user.ts
import { Document } from "mongoose";
The User model is created. We can go and start writing the Login and Register controllers.
Registration
Go to the controllers directory and create a new directory users which will contain a
new index.ts file.
if (oldUser) {
return res.status(400).send("User Already Exist. Please Login");
}
user.save().then((doc) => {
// Generating Access and refresh token
const token = jwt.sign(
{ user_id: doc._id, username: username },
process.env.JWT_SECRET_KEY,
{
expiresIn: "5min",
}
);
    // Generate the refresh token the same way (no expiry here, so it outlives the access token)
    const refreshToken = jwt.sign(
      { user_id: doc._id, username: username },
      process.env.JWT_SECRET_KEY
    );
    refreshTokens.push(refreshToken);
return res.status(201).json({
user: doc,
token: token,
refresh: refreshToken,
});
});
export {registerUser};
• Send responses
When the token expires, the intuitive way to claim a new access token will be to log in again. But
this is not effective at all for the experience of possible users. Then instead of login in again, the
client can claim a new access token by making a request with the refresh token obtained at login or
registration. We’ll write the routes for this later.
Now, let’s add this controller to the routes and register the new routes in our application.
// src/routes/index.ts
// Menu Routes
menuRoutes.get("/menu", getMenus);
menuRoutes.post("/menu", addMenu);
menuRoutes.put("/menu/:id", updateMenu);
menuRoutes.delete("/menu/:id", deleteMenu);
menuRoutes.get("/menu/:id", retrieveMenu);
// User Routes
userRoutes.post("/user/register", registerUser);
And inside the app.ts file, let's use the new route.
// src/app.ts
app.use(userRoutes);
...
Login
Inside the index.ts file of users controllers, let's write the login function.
// src/controllers/users/index.ts
refreshTokens.push(refreshToken);
// user
return res.status(200).json({
user: user,
token: token,
refresh: refreshToken,
});
}
...
userRoutes.post("/user/login", loginUser);
...
Ah, great. The login endpoint is done, and the registration endpoint is done too. But the resources are not protected: you can still access them, which is why we need to write a middleware.
A middleware is a function that acts as a bridge between a request and the function that executes it.
Create a new directory named middleware inside src and create a file index.ts.
  if (authHeader) {
    const [header, token] = authHeader.split(" ");
    // verify the token before letting the request through
    jwt.verify(token, process.env.JWT_SECRET_KEY, (err: Error, user: IUser) => {
      if (err) {
        return res.sendStatus(403);
      }
      req.user = user;
      next();
    });
    return;
  }
  return res.sendStatus(401);
};
• Verifying the token and then creating a new key with user as value. req.user = user
app.use(userRoutes);
app.use(authenticateJWT);
app.use(menuRoutes);
...
Did you notice something? The middleware is placed after the userRoutes and before menuRoutes.
Well, going like this, node & express will understand that the userRoutes are not protected and also
that all the routes after the authenticateJWT will require an access token.
To test this, make a GET request to http://localhost:4000/menus without authorization header.
You'll receive a 401 error. Then use the access token from your previous login and add it to the
authorization header. You should retrieve the menus.
Refresh token
if (!refreshTokens.includes(refresh)) {
return res.status(403).send("Refresh Invalid. Please login.");
}
jwt.verify(
refresh,
process.env.JWT_SECRET_KEY,
(err: Error, user: IUser) => {
if (err) {
return res.sendStatus(403);
}
const token = jwt.sign(
{ user_id: user._id, username: user.username },
      process.env.JWT_SECRET_KEY,
{
expiresIn: "5min",
}
);
return res.status(201).send({
token: token,
});
}
);
• Making sure that the refresh token exists in the memory of the server
• And finally verifying the refresh token then sending a new access token.
Add this new controller to the userRoutes.
// src/routes/index.ts
...
userRoutes.post("/user/refresh", retrieveToken);
...
But there is a problem. If the refresh token is stolen from the user, someone can use it to generate
as many new tokens as they’d like. Let’s invalidate this.
// src/controllers/users/index.ts
...
const logoutUser = async (
req: Request,
res: Response
): Promise<e.Response<any, Record<string, any>>> => {
  try {
    const { refresh } = req.body;
    refreshTokens = refreshTokens.filter((token) => refresh !== token);
    return res.status(200).send("Logout successful");
  } catch (error) {
    return res.status(500).send("Internal Server Error");
  }
};
import {
loginUser,
logoutUser,
registerUser,
retrieveToken,
} from "../controllers/users";
...
userRoutes.post("/user/logout", logoutUser);
...
Conclusion
In this article, we’ve learned how to build an authentication system for our Node & Express
application using JWT.
And as every article can be made better so your suggestion or questions are welcome in the
comment section.
Write better code using these five simple yet unusual JavaScript tips.
Too many articles about JavaScript tips only cover the basics of Array functions or obvious
improvements to your code. This article will go more in-depth, helping you improve the code
you’re writing daily.
1. Wait for … anything
Sometimes, you want to wait for something to happen. And while this task can become complex
(e.g., using a non-blocking loop), there’s a simple solution for most of your waiting problems:
Promises.
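The promise snippet referred to below seems to have been lost from this copy; a typical version is a one-line sleep helper:

```javascript
// Resolves after the given number of milliseconds
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

(async () => {
  console.log("waiting...");
  await sleep(1000); // pauses this async function for about a second
  console.log("done");
})();
```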
This promise will resolve after about 1 second. You can also store it in a variable and use await to
block for a second (beware of potential UX issues). And while you can find a use case for the
snippet above, it implies a much more helpful trick.
You can use promises as semaphores: Sometimes, you want to execute an asynchronous, long-
running process. But a user could trigger this process again and again. So you want to ensure that
the running process has to finish before your users can start it again. Here’s how:
let processStatus = null;
function myProcess() {
if (processStatus) {
return;
  }

  processStatus = new Promise(resolve => {
// Do some heavy lifting, for example
setTimeout(() => {
// pretending a long running action
resolve();
}, 5000);
})
.then(() => {
processStatus = null;
});
}
A user can only click this when there’s no active process. It’s helpful to avoid multiple fetches for
the same data.
I see many people using the forEach functions on JavaScript Arrays, but the majority of them
aren’t aware of its true power: Async loops.
const asyncArr = [
new Promise(resolve => setTimeout(resolve.bind(this, 1), 2000)),
new Promise(resolve => setTimeout(resolve.bind(this, 2), 500)),
new Promise(resolve => setTimeout(resolve.bind(this, 3), 5000)),
new Promise(resolve => setTimeout(resolve.bind(this, 4), 1000)),
];

asyncArr.forEach(async (el) => {
const i = await el;
console.log(i);
});

// logs: 2, 4, 1, 3
While this is also possible using a for-of-loop, it reads much more elegant using await. Here’s the
same example using a for-of-loop.
for (const el of asyncArr) {
el.then(console.log);
}
Don’t let the fact fool you that the for-of-loop is more concise. In this example, we don’t do any
computation with i. However, imagine doing more computation in the function body of then used
by the for-const-loop.
There are two issues you might hit with the forEach interface here:
First, it doesn’t have a return value. That means you either alter your original array or you don’t alter it at all. If you decide to modify the array, you are producing side effects that are probably hard to debug; if you don’t change it at all, fine.
Second, it’s unstable. There’s no way I can log results in the order of the original array without
looping synchronously through all elements. The latter option means that the loop would run
much slower. To avoid this, we can use Promise.all in combination with a map. This will result in a
new array with the values received by our asynchronous calls in the same order as the original
array.
const asyncArr = [
new Promise(resolve => setTimeout(resolve.bind(this, 1), 2000)),
new Promise(resolve => setTimeout(resolve.bind(this, 2), 500)),
new Promise(resolve => setTimeout(resolve.bind(this, 3), 5000)),
new Promise(resolve => setTimeout(resolve.bind(this, 4), 1000)),
]
Promise.all(asyncArr)
  .then(console.log);

// logs: [1,2,3,4] after 5 seconds
Sweet! It won’t cause side effects since it returns a new array (except if the functions
in asyncArr cause side effects — then you’ve lost) and it will return stable results!
This tip is simple yet powerful. There are many situations where you’ll see yourself writing an else
block when it could be avoided with 2 seconds of thinking. So let me introduce you to some
situations where it is unnecessary.
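The examples compared in the next paragraph seem to be missing from this copy; a typical pair looks like this (function and field names are invented for illustration):

```javascript
// With else:
function discountWithElse(user) {
  if (user.isPremium) {
    return 0.2;
  } else {
    return 0;
  }
}

// Without else: return early instead
function discount(user) {
  if (user.isPremium) {
    return 0.2;
  }
  return 0;
}

console.log(discount({ isPremium: true }));  // logs: 0.2
console.log(discount({ isPremium: false })); // logs: 0
```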
You might object: The examples do not differ much in size — why bother? It’s not about size — it’s
about readability and reducing complexity.
Any if and else statement will increase the complexity of your functions. When encountering an
else block, how long is it? How long was the if-block? Do you still remember where you are when
you read?
There’s a rule of thumb here: Handle errors with if-statements and return as soon as possible.
Then, the function should do what it is supposed to do outside of any if/else.
Junior Developers who come across best practices always ask me: When do I know I violated the
Single Responsibility Principle? When do I realize that my function does more than one thing? If-
else can be an indicator!
function myFun() {
if (x > 10) {
// do something
} else {
// do something else
}
if (y < 100) {
// do something
} else {
// do something else
  }

  return someVar;
}
Especially when having multiple if-“else if”-else blocks in one function, the chances are that you’re
violating the SRP for this function. The example above could be refactored to
function myFunc() {
const xValid = checkX(x);
const yValid = checkY(y);
return xValid && yValid;
}
Of course, the refactoring highly depends on the semantics of your code. However, this would be
one possible way to rewrite the example above to a much cleaner and more readable function.
Or even this:
const x = someVar !== "something"
? somethingElse
: 1;
Okay, the last example is using else (kind of). However, we avoided using let which might be a
source of errors.
Okay, this one is probably the one most of you are familiar with (besides the Promise.all thing,
maybe), but I met so many developers who were unaware of this that I decided to put it on the list.
const as = document.querySelectorAll("a");
First, you might think it’s an array, but it’s not. Proof?
Array.isArray(as);
// -> false
It means it is a so-called “array-like object” (or an iterable). So whenever you encounter this kind
of object, you can create an array from it.
const asArray = Array.from(as);
And this is indeed an array. Now, you’re able to use map, filter, or any other array function on it.
References can cause all sorts of side effects in your code. Being aware of when you’re handling a reference and when you’re merely working with a value is key to writing bug-free software. However, I won’t go into much detail about what a reference is. You’ll mainly experience this behavior when working with objects and arrays.
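The assignment discussed next looks something like this (a minimal sketch):

```javascript
const a = { key: "value" };
const b = a; // b points at the same object as a

b.key = "something else";

console.log(a.key); // logs: something else (a changed too)
```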
b is only a reference to a, therefore whenever b changes a referenced object key, then it will also be
reflected on a. We can create a new object b without any references to a using several techniques.
5.1 Destructuring
This one has been popular for quite some time now. Spreading an object into a new literal (commonly lumped in with destructuring) removes the top-level reference. Note that this is a shallow copy: nested objects are still shared.
const a = { key: "value" };
const b = {...a};
b.key = "something else";
console.log(a.key); // logs: value
console.log(b.key); // logs: something else
5.2 Object.assign
The object spread above is essentially syntactic sugar for Object.assign, so references can also be
removed using this technique:
const b = Object.assign({}, a);
5.3 Array.from
If you’re dealing with Arrays, then you can use Array.from to get rid of references. Here’s the issue:
const arr1 = [1,2,3,4];
const arr2 = arr1;
arr2[0] = 5;
console.log(arr1); // -> [5, 2, 3, 4]
Destructuring also works for arrays, of course. Another thing to notice is: Array.from not only
works on “array-like objects,” but on arrays, too.
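A minimal sketch of using Array.from to break the reference from the example above:

```javascript
const arr1 = [1, 2, 3, 4];
const arr2 = Array.from(arr1); // a new, independent (shallow) copy
arr2[0] = 5;
console.log(arr1); // -> [1, 2, 3, 4] (the original stays untouched)
console.log(arr2); // -> [5, 2, 3, 4]
```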
5.4 Last resort: JSON.stringify
As a last resort, you can stringify an object and parse it again. All references will be cleared.
const a = { key: "value" };
const b = JSON.parse(JSON.stringify(a));
b.key = "something else";
console.log(a.key); // logs: value
console.log(b.key); // logs: something else
However, be aware that JSON.stringify also discards type information: Dates are serialized to
strings, and functions, undefined values, and Symbols are dropped entirely. This may cause you
some trouble with Dates and other complex objects.
If you liked this article, make sure to clap and follow me to show me that you’d like more of that
stuff. Thank you so much for reading and your support!
Exciting new CSS features in 2022
All major browsers will get many new features in the coming months.
Photo by Callum Hill on Unsplash
All major browsers have agreed on a specific set of features to implement in 2022. The progress of
the so-called “Interop 2022” can be tracked here: https://wpt.fyi/interop-2022?stable. I will tell
you my most anticipated features that will land during Interop 2022.
A new HTML Tag: The dialog Element
TLDR; we have a new feature that will help all of us. In most projects, we have to implement a
Modal. Usually, we will use a div and add some open/close logic to it. This has become such a
common pattern that we’ve got a new Element for it: Dialog. See the browser support
here: https://caniuse.com/?search=dialog. Adoption is really good now; keep in mind that Safari
only supports it since version 15.4.
If you’ve ever written a cross-platform mobile web app, then you know the struggle. What’s the
height of the users’ actual viewport? Disappearing address bars, software keyboards, and other
weird behaviors (safe zones…) left us in despair. But fear not, my fellow developer. There’s hope, at
last! Behold the new Viewport Units: dvh, lvh, and svh.
https://twitter.com/jensimmons/status/1499441043930062854
The image is self-explanatory. I assume dvh will be a life-saver. Unfortunately, it hasn’t landed in
Chrome yet: https://caniuse.com/?search=dvh
CSS Subgrids
I love how the grid display type gets more and more attention. It has always been in the shadows
of flexboxes. However, it can be much more powerful when used right.
When doing reviews, I get asked this a lot: “When to use flex vs. grid?”
I have already answered this question in another article about the RAM Layout Pattern. TLDR;
Flexboxes are for one-dimensional layouts; grids are for 2-dimensional layouts. Sometimes it can
be that simple!
Two things are still notoriously hard to solve using CSS grids.
The first one is masonry grids. Unfortunately, it’s not really possible to do them with CSS-only
solutions, and it’s probably not going to change soon.
The second one is dealing with subgrids, so a grid within a grid. Consider the following example
(original example by web.dev)
If it looks like this in your browser, then you’re probably using Chrome or Safari:
It should look like this when using a browser with CSS subgrid support (like Firefox):
The web.dev Codepen with CSS subgrid support
As you can see, this can be a life saver! Having individual columns that share the same height for
their layout is still hard to do in CSS. Here’s the current browser
support: https://caniuse.com/?search=subgrid
The feature you’ll learn about now might not seem as helpful as the other features I have shown
you. But I can certainly find some use cases for this one, too!
There are at least two new color functions added to CSS this year; the one I anticipate the most
is color-mix. Here’s the syntax:
div {
background-color: color-mix(in hsl, red 50%, yellow 50%);
}
color-mix expects the first parameter to be a color space (such as srgb, hsl, lch, or oklch). The
second and third parameters are colors with their ratio in percent. The result will be a blended
color with the given shares in your desired color space.
If you want to learn more about this specific feature, here’s the official
spec https://drafts.csswg.org/css-color-5/
You can try it out using Firefox nightly or Safari beta: https://caniuse.com/?search=color-mix
Cascade layers
Last but not least: Cascade layers, of course. I mention them now because they have landed across
browsers, and you’ve probably already seen them in action (e.g., when using Tailwind).
@layer utilities { /* the layer name here is illustrative */
  .padding-lg {
    padding: .8rem;
  }
}
Layers are a complex topic. If you want to learn more about them, read the MDN doc (I will link it
below). However, I’ll give you the TLDR; You may define multiple layers. The order in which they
are defined is important. So, the rules in the last defined layer are the most important. Therefore,
they take precedence. This can help you to solve specificity issues.
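To illustrate the ordering rule, here is a small sketch (the layer names are made up for this example). Both selectors have the same specificity, yet the later-defined layer wins:

```css
@layer base, overrides;

@layer base {
  .btn { color: red; }
}

@layer overrides {
  /* This wins: "overrides" was defined after "base" */
  .btn { color: blue; }
}
```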
Thank you for reading; follow me to stay up to date on web platform features!
@Configuration
@EnableWebSocketMessageBroker
class WebSocketConfig : WebSocketMessageBrokerConfigurer {
    override fun registerStompEndpoints(registry: StompEndpointRegistry) {
        registry.addEndpoint("/stomp").setAllowedOrigins("*")
    }
    // Prefixes inferred from the destinations used later (/topic/..., /app/...)
    override fun configureMessageBroker(registry: MessageBrokerRegistry) {
        registry.enableSimpleBroker("/topic")
        registry.setApplicationDestinationPrefixes("/app")
    }
}
The API endpoint provides a way for microservices (backend) to send messages to the web
application (frontend). As the messages only require a one-way flow (backend → WebSocket
Server → frontend), using APIs will be a good communication medium between microservices
(backend → Websocket server).
@RestController
@RequestMapping("/api/notification")
class NotificationController(private val template: SimpMessagingTemplate) {
@PostMapping
fun newMessage(@RequestBody request: NewMessageRequest) {
template.convertAndSend(request.topic, request.message)
}
}
The code above creates a REST controller with a POST request endpoint that takes in a request
body “NewMessageRequest” where the topic is the STOMP destination that the client (frontend)
subscribes to and message is the actual message in String format. With this, you can now send a
message via API to the WebSocket server, which will then be forwarded to the web application
(frontend) via WebSocket.
Note: Depending on your use case, you can omit this step if you do not require bidirectional real-
time communication between the web application (frontend) and microservices (backend).
Communication via APIs between microservices (backend and Websocket server) will not be
optimal for real-time communications as compared to using a publish-subscribe messaging
pattern. Hence, for bidirectional communication, we will make use of a publish-subscribe
messaging pattern.
There are many ways to implement a publish-subscribe messaging pattern but for demonstration
and simplicity’s sake, we will use Redis Pub/Sub.
To get started, run a Redis server locally using Docker (docker run --name redis-server -p
6379:6379 -d redis) and add the following configuration to the application.yml file for the
WebSocket server to connect to the Redis server.
# application.yml
spring.redis:
host: localhost
port: 6379
Next, create a configuration file, RedisConfig.kt, and add the configuration below. Essentially, we
are configuring a ReactiveRedisTemplate that communicates with the Redis server and is configured
to serialize and deserialize messages as String.
@Configuration
class RedisConfig {
@Bean
fun reactiveRedisTemplate(factory: LettuceConnectionFactory): ReactiveRedisTemplate<String, String> {
val serializer = Jackson2JsonRedisSerializer(String::class.java)
val builder = RedisSerializationContext.newSerializationContext<String, String>(StringRedisSerializer())
val context = builder.value(serializer).build()
return ReactiveRedisTemplate(factory, context)
}
}
Following this, create a RedisService that contains logic for subscribing and publishing to the Redis
server. In the example below, we subscribed to an inbound channel
topic GREETING_CHANNEL_INBOUND which listens for incoming messages from other microservices
(backend) and forwards all messages received to the STOMP destination /topic/greetings.
@Service
class RedisService(
    private val reactiveRedisTemplate: ReactiveRedisTemplate<String, String>,
    private val websocketTemplate: SimpMessagingTemplate
) {
    fun publish(topic: String, message: String) {
        reactiveRedisTemplate.convertAndSend(topic, message).subscribe()
    }

    // Helper inferred from the description above: listen to a Redis channel
    // and forward every received message to the given STOMP destination
    private fun subscribe(topic: String, destination: String) {
        reactiveRedisTemplate.listenToChannel(topic)
            .doOnNext { websocketTemplate.convertAndSend(destination, it.message) }
            .subscribe()
    }

    @PostConstruct
    fun subscribe() {
        subscribe("GREETING_CHANNEL_INBOUND", "/topic/greetings")
    }
}
Lastly, create a Controller that processes messages from the web application (frontend) which are
sent to the WebSocket server with the prefix /app. In the example below, messages sent
to /app/greet will be forwarded (published) to an outbound channel
topic GREETING_CHANNEL_OUTBOUND which will then be processed by any microservice (backend) that is
listening to that channel.
With that, we have set up the WebSocket server to act as a middleware (or proxy) that
communicates with the web application (frontend) via WebSocket and communicates with the
microservices (backend) via Redis Pub/Sub.
Testing WebSocket Connection
Using an open-source websocket client debugger tool built by jiangxy as a mock web application
(frontend), we can test the WebSocket server we built above.
Next, send an HTTP POST request to the WebSocket server using the command below:
curl -X POST -d '{"topic": "/topic/toast", "message": "testing API endpoint" }' -H 'Content-Type: application/json' localhost:8080/api/notification
The WebSocket debugger tool should have the output shown below:
Screenshot of WebSocket debugger tool’s output for sending a message from backend via API
This shows that the WebSocket server has successfully received the message via API and
forwarded the message to the web application (frontend) via WebSocket.
Note that the extra \” is required as the WebSocket server is configured to receive String
messages. The WebSocket debugger tool should receive the message as shown below
Screenshot of WebSocket debugger tool’s output for sending a message from backend via Redis PubSub
This shows that the WebSocket server has successfully received the message via Redis Pub/Sub
and forwarded the message to the web application (frontend) via WebSocket.
This shows that the WebSocket server has successfully received the message via WebSocket and
forwarded the message to the microservices (backend) via Redis Pub/Sub.
Summary
In summary, we have run through a possible design of a WebSocket server in a microservice
architecture. Having a WebSocket server greatly aligns with the “Single Responsibility Principle”
of microservices, where it manages all WebSocket connections to the web application (frontend) as
well as handles real-time communications between the web application (frontend) and other
microservices (backend).
That’s it! I hope you learned something new from this article. Stay tuned for the next one, where
we will look into scaling the WebSocket server.
They look almost identical, but there are some differences between them. Before jumping in, let us
see what map() and forEach() actually are.
forEach()
This method allows you to execute a callback function by iterating through each element of an
array. Always remember that it doesn’t return anything; if you try to use its return value, it will
be undefined.
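A quick sketch of that behavior:

```javascript
const numbers = [1, 2, 3];
// forEach runs the callback for its side effects only...
const result = numbers.forEach((n) => n * 2);
console.log(result); // -> undefined (forEach has no return value)
```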
map()
It is almost identical to the forEach method and executes a callback function to loop over an array
easily. But the difference is that it always returns a new array, which means it doesn’t change the
source array either. It is, therefore, an immutable operation.
A great thing about the map method is that it’s also chainable, meaning you can call a number of
map operations in a row.
map: chainable
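For example, map calls can be chained with each other (and with other array methods such as filter):

```javascript
const result = [1, 2, 3, 4]
  .map((n) => n * 2)     // [2, 4, 6, 8]
  .map((n) => n + 1)     // [3, 5, 7, 9]
  .filter((n) => n > 4); // [5, 7, 9]
console.log(result); // -> [5, 7, 9]
```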
Both methods help us to iterate through our array, and the choice between map and forEach will
depend on your use case.
map vs forEach
If you’re planning to transform the array, use the map function, since it doesn’t change the
original array and returns a new one. But if you won’t need the returned array and just want to
loop through all elements of an array, use forEach or even a for loop.
Apart from this, these functions are almost identical.
That is all from my end. If there are more differences or any mistakes in the article, please
share your views in the comment section, and thanks for reading. Check out my other
articles at https://medium.com/@aayushtibra1997
What is Node.js?
There is much to be said and explained about Node.js, but I won’t dive too deep into it. Node.js is
an “asynchronous event-driven JavaScript runtime” which basically allows you to use
JavaScript in a non-web browser setting. It’s not a framework nor a coding language. It is
used most commonly for server-side programming.
Starting Node.js
To use Node.js, you must install it first. Then, in your project folder terminal, type npm init, which
should look like this:
npm init
If you don’t want to configure the above descriptions, just run npm init -y. Then, a file
called package.json should be created in your project folder. It contains information about
dependencies and basic information about the project. It is in JSON format (= JavaScript
Object Notation), which expresses an object in a {key: value} format.
Note: I’ve created a separate index.js file in my project folder, which will be the “entry point”
of my server-side code.
npm packages
Just like Flutter has pub, Node.js has npm (Node Package Manager/Modules), an open
source library for Node.js. npm packages are all declared in the package.json file. To
install/uninstall a package:
npm i packageName --save // install
npm uninstall packageName --save // uninstall
I recommend using the --save flag because it automatically registers the package in package.json,
which is convenient when someone else (or you) decides to use the source code. (Since npm 5,
dependencies are saved by default, so the flag is optional.) There are a lot of popular and
commonly used packages, and for my project, I will start off with Express.
Express
I feel like Express is the most popular Node.js framework out there as it allows you to handle
HTTP requests easily and flexibly. Install it by typing the following in the project terminal:
npm i express --save
require() in Node.js is basically importing a module in a separate file and returns the exported
object. I think of it as using the keyword as in Dart when you import a package.
import 'package:http/http.dart' as http;
The app returned by express() is a JavaScript function, which is passed to Node’s HTTP
servers as a callback to handle requests. Also, to omit the constant use of const when declaring
variables, you can just continue the declaration after a comma.
If you don’t call the app.listen() function, your server will not start. It is used to bind and listen
for connections on the specified host and port; if the port is not specified, the OS will use an
arbitrary unused port. It has a couple of optional parameters, of which I use a callback.
In order to start your server, type node entryPoint in the terminal, and you can exit with ctrl +
c.
app. methods
app. has a lot of methods. I’ll be going over app.use(), app.get(), and app.post().
app.use()
app.use() mounts the specified middleware (at the application level, or the route level if you
specify a path; this also applies to app.get() and app.post()). Unlike app.get() or app.post(),
middleware functions have access to (1) the request, (2) the response, and (3) the next middleware
function. I’ll write about middleware functions another day.
app.use(express.urlencoded({ extended: false }));
app.use(express.json());
The extended: false option means that the data will be parsed with the querystring library,
and extended: true means that it will be parsed with the qs library. The qs library allows you to
create nested objects from your query string, whereas the querystring library does not. Also, qs
will NOT filter out “?” whereas the querystring library will. Unless you have full, complex objects
as queries, extended: false is probably better.
app.get()
app.get() takes a path and a callback as its required parameters. It “routes” the HTTP get
requests to the specified path, with the specified callback function(s). Let’s say I want to get my
usersData.
Note: if you put just { data }, it becomes {data: data} — super convenient.
Response
Express’ response has several methods, some of which are: res.download() — for prompting a
file download, res.end() — ending the response without any data, res.json() — sends a JSON
response, res.redirect() — redirecting to a specified path, res.send() — sends various
responses. For res.send(), if the parameter is a String, the content-type will be “text/html”; if the
parameter is an Array or an Object, the content-type will be JSON.
app.post()
app.post() is basically the same as app.get() except it is used for POST requests. Since you can send
a body with post requests, you’ll be able to view the body by using req.body.
express.Router()
express.Router() allows you to make a router as a module and it’s better for code separation. Make
a separate file (or directory)
auth.js
All you need to do in your index.js file is load the module and app.use() it.
const express = require("express"),
  app = express(),
  auth = require("./lib/server/auth.js");

app.use("/auth", auth);