Node.js
Notes for Professionals
300+ pages
of professional hints and tricks
Disclaimer
GoalKicker.com – Free Programming Books
This is an unofficial free book created for educational purposes and is
not affiliated with official Node.js group(s) or company(s).
All trademarks and registered trademarks are the property of their respective owners.
Contents
About ................................................................................................................................................................................... 1
Chapter 1: Getting started with Node.js ............................................................................................................ 2
Section 1.1: Hello World HTTP server ........................................................................................................................... 3
Section 1.2: Hello World command line ....................................................................................................................... 5
Section 1.3: Hello World with Express .......................................................................................................................... 5
Section 1.4: Installing and Running Node.js ................................................................................................................. 6
Section 1.5: Debugging Your NodeJS Application ...................................................................................................... 6
Section 1.6: Hello World basic routing ......................................................................................................................... 7
Section 1.7: Hello World in the REPL ............................................................................................................................ 8
Section 1.8: Deploying your application online ........................................................................................................... 8
Section 1.9: Core modules ............................................................................................................................................. 9
Section 1.10: TLS Socket: server and client ................................................................................................................ 13
Section 1.11: How to get a basic HTTPS web server up and running! ..................................................................... 16
Chapter 2: npm ............................................................................................................................................................ 19
Section 2.1: Installing packages ................................................................................................................................. 19
Section 2.2: Uninstalling packages ............................................................................................................................ 22
Section 2.3: Setting up a package configuration ..................................................................................................... 23
Section 2.4: Running scripts ....................................................................................................................................... 24
Section 2.5: Basic semantic versioning ..................................................................................................................... 24
Section 2.6: Publishing a package ............................................................................................................................. 25
Section 2.7: Removing extraneous packages .......................................................................................................... 25
Section 2.8: Listing currently installed packages ..................................................................................................... 26
Section 2.9: Updating npm and packages ............................................................................................................... 26
Section 2.10: Scopes and repositories ....................................................................................................................... 27
Section 2.11: Linking projects for faster debugging and development .................................................................. 27
Section 2.12: Locking modules to specific versions ................................................................................................. 28
Section 2.13: Setting up for globally installed packages ......................................................................................... 28
Chapter 3: Web Apps With Express .................................................................................................................... 30
Section 3.1: Getting Started ......................................................................................................................................... 30
Section 3.2: Basic routing ........................................................................................................................................... 30
Section 3.3: Modular express application ................................................................................................................. 32
Section 3.4: Using a Template Engine ....................................................................................................................... 33
Section 3.5: JSON API with ExpressJS ....................................................................................................................... 34
Section 3.6: Serving static files ................................................................................................................................... 35
Section 3.7: Adding Middleware ................................................................................................................................. 36
Section 3.8: Error Handling ......................................................................................................................................... 36
Section 3.9: Getting info from the request ................................................................................................................ 37
Section 3.10: Error handling in Express ...................................................................................................................... 37
Section 3.11: Hook: How to execute code before any req and after any res ........................................................ 38
Section 3.12: Setting cookies with cookie-parser ..................................................................................................... 38
Section 3.13: Custom middleware in Express ............................................................................................................ 39
Section 3.14: Named routes in Django-style ............................................................................................................. 39
Section 3.15: Hello World ............................................................................................................................................. 40
Section 3.16: Using middleware and the next callback ............................................................................................ 40
Section 3.17: Error handling ........................................................................................................................................ 42
Section 3.18: Handling POST Requests ...................................................................................................................... 43
Chapter 4: Filesystem I/O ...................................................................................................................................... 45
Section 4.1: Asynchronously Read from Files ........................................................................................................... 45
Section 4.2: Listing Directory Contents with readdir or readdirSync ..................................................................... 45
Section 4.3: Copying files by piping streams ............................................................................................................ 46
Section 4.4: Reading from a file synchronously ...................................................................................................... 47
Section 4.5: Check Permissions of a File or Directory ............................................................................................. 47
Section 4.6: Checking if a file or a directory exists .................................................................................................. 48
Section 4.7: Determining the line count of a text file ............................................................................................... 48
Section 4.8: Reading a file line by line ....................................................................................................................... 49
Section 4.9: Avoiding race conditions when creating or using an existing directory ........................................... 49
Section 4.10: Cloning a file using streams ................................................................................................................. 50
Section 4.11: Writing to a file using writeFile or writeFileSync ................................................................................. 50
Section 4.12: Changing contents of a text file .......................................................................................................... 51
Section 4.13: Deleting a file using unlink or unlinkSync ........................................................................................... 51
Section 4.14: Reading a file into a Buffer using streams ......................................................................................... 52
Chapter 5: Exporting and Consuming Modules ........................................................................................... 53
Section 5.1: Creating a hello-world.js module ........................................................................................................... 53
Section 5.2: Loading and using a module ................................................................................................................ 54
Section 5.3: Folder as a module ................................................................................................................................. 55
Section 5.4: Every module injected only once .......................................................................................................... 55
Section 5.5: Module loading from node_modules ................................................................................................... 56
Section 5.6: Building your own modules ................................................................................................................... 56
Section 5.7: Invalidating the module cache .............................................................................................................. 57
Chapter 6: Exporting and Importing Module in node.js .......................................................................... 58
Section 6.1: Exporting with ES6 syntax ...................................................................................................................... 58
Section 6.2: Using a simple module in node.js ......................................................................................................... 58
Chapter 7: How modules are loaded ................................................................................................................ 59
Section 7.1: Global Mode ............................................................................................................................................. 59
Section 7.2: Loading modules .................................................................................................................................... 59
Chapter 8: Cluster Module ..................................................................................................................................... 60
Section 8.1: Hello World .............................................................................................................................................. 60
Section 8.2: Cluster Example ...................................................................................................................................... 60
Chapter 9: Readline ................................................................................................................................................... 62
Section 9.1: Line-by-line file reading .......................................................................................................................... 62
Section 9.2: Prompting user input via CLI ................................................................................................................. 62
Chapter 10: package.json ....................................................................................................................................... 63
Section 10.1: Exploring package.json ......................................................................................................................... 63
Section 10.2: Scripts ..................................................................................................................................................... 66
Section 10.3: Basic project definition ......................................................................................................................... 67
Section 10.4: Dependencies ........................................................................................................................................ 67
Section 10.5: Extended project definition .................................................................................................................. 68
Chapter 11: Event Emitters ..................................................................................................................................... 69
Section 11.1: Basics ....................................................................................................................................................... 69
Section 11.2: Get the names of the events that are subscribed to .......................................................................... 69
Section 11.3: HTTP Analytics through an Event Emitter ........................................................................................... 70
Section 11.4: Get the number of listeners registered to listen for a specific event ............................................... 70
Chapter 12: Autoreload on changes .................................................................................................................. 72
Section 12.1: Autoreload on source code changes using nodemon ....................................................................... 72
Section 12.2: Browsersync .......................................................................................................................................... 72
Chapter 13: Environment ......................................................................................................................................... 74
Section 13.1: Accessing environment variables ......................................................................................................... 74
Section 13.2: process.argv command line arguments ............................................................................................. 74
Section 13.3: Loading environment properties from a "property file" ................................................................... 75
Section 13.4: Using different Properties/Configuration for different environments like dev, qa, staging etc. ..... 75
Chapter 14: Callback to Promise ........................................................................................................................ 77
Section 14.1: Promisifying a callback ......................................................................................................................... 77
Section 14.2: Manually promisifying a callback ........................................................................................................ 77
Section 14.3: setTimeout promisified ......................................................................................................................... 78
Chapter 15: Executing files or commands with Child Processes ......................................................... 79
Section 15.1: Spawning a new process to execute a command .............................................................................. 79
Section 15.2: Spawning a shell to execute a command ........................................................................................... 79
Section 15.3: Spawning a process to run an executable ......................................................................................... 80
Chapter 16: Exception handling ........................................................................................................................... 82
Section 16.1: Handling Exceptions in Node.js ............................................................................................................. 82
Section 16.2: Unhandled Exception Management .................................................................................................... 83
Section 16.3: Errors and Promises .............................................................................................................................. 84
Chapter 17: Keep a node application constantly running ..................................................................... 86
Section 17.1: Use PM2 as a process manager ........................................................................................................... 86
Section 17.2: Running and stopping a Forever daemon ......................................................................................... 87
Section 17.3: Continuous running with nohup ........................................................................................................... 88
Chapter 18: Uninstalling Node.js ......................................................................................................................... 89
Section 18.1: Completely uninstall Node.js on Mac OSX ........................................................................................... 89
Section 18.2: Uninstall Node.js on Windows .............................................................................................................. 89
Chapter 19: nvm - Node Version Manager ..................................................................................................... 90
Section 19.1: Install NVM .............................................................................................................................................. 90
Section 19.2: Check NVM version ............................................................................................................................... 90
Section 19.3: Installing a specific Node version ........................................................................................................ 90
Section 19.4: Using an already installed node version ............................................................................................ 90
Section 19.5: Install nvm on Mac OSX ........................................................................................................................ 91
Section 19.6: Run any arbitrary command in a subshell with the desired version of node ................................. 91
Section 19.7: Setting alias for node version .............................................................................................................. 92
Chapter 20: http .......................................................................................................................................................... 93
Section 20.1: http server .............................................................................................................................................. 93
Section 20.2: http client .............................................................................................................................................. 94
Chapter 21: Using Streams ..................................................................................................................................... 95
Section 21.1: Read Data from TextFile with Streams ............................................................................................... 95
Section 21.2: Piping streams ....................................................................................................................................... 95
Section 21.3: Creating your own readable/writable stream ................................................................................... 96
Section 21.4: Why Streams? ....................................................................................................................................... 97
Chapter 22: Deploying Node.js applications in production ................................................................... 99
Section 22.1: Setting NODE_ENV="production" ........................................................................................................ 99
Section 22.2: Manage app with process manager ................................................................................................ 100
Section 22.3: Deployment using process manager ............................................................................................... 100
Section 22.4: Deployment using PM2 ...................................................................................................................... 101
Section 22.5: Using different Properties/Configuration for different environments like dev, qa, staging etc. .. 102
Section 22.6: Taking advantage of clusters ........................................................................................................... 103
Chapter 23: Securing Node.js applications .................................................................................................. 104
Section 23.1: SSL/TLS in Node.js .............................................................................................................................. 104
Section 23.2: Preventing Cross Site Request Forgery (CSRF) .............................................................................. 104
Section 23.3: Setting up an HTTPS server .............................................................................................................. 105
Section 23.4: Using HTTPS ....................................................................................................................................... 106
Section 23.5: Secure express.js 3 Application ......................................................................................................... 107
Chapter 24: Mongoose Library ......................................................................................................................... 109
Section 24.1: Connect to MongoDB Using Mongoose ........................................................................................... 109
Section 24.2: Find Data in MongoDB Using Mongoose, Express.js Routes and $text Operator ....................... 109
Section 24.3: Save Data to MongoDB using Mongoose and Express.js Routes ................................................. 111
Section 24.4: Find Data in MongoDB Using Mongoose and Express.js Routes .................................................. 113
Section 24.5: Useful Mongoose functions ............................................................................................................... 115
Section 24.6: Indexes in models ............................................................................................................................... 115
Section 24.7: find data in mongodb using promises ............................................................................................. 117
Chapter 25: async.js ................................................................................................................................................ 120
Section 25.1: Parallel : multi-tasking ........................................................................................................................ 120
Section 25.2: async.each (To handle an array of data efficiently) ....................................................................... 121
Section 25.3: Series : independent mono-tasking .................................................................................................. 122
Section 25.4: Waterfall : dependent mono-tasking ............................................................................................... 123
Section 25.5: async.times (To handle a for loop in a better way) ........................................................................ 124
Section 25.6: async.series (To handle events one by one) .................................................................................... 124
Chapter 26: File upload .......................................................................................................................................... 125
Section 26.1: Single File Upload using multer ......................................................................................................... 125
Section 26.2: Using formidable module .................................................................................................................. 126
Chapter 27: Socket.io communication ........................................................................................................... 128
Section 27.1: "Hello world!" with socket messages ................................................................................................. 128
Chapter 28: Mongodb integration ................................................................................................................... 129
Section 28.1: Simple connect .................................................................................................................................... 129
Section 28.2: Simple connect, using promises ....................................................................................................... 129
Section 28.3: Connect to MongoDB ......................................................................................................................... 129
Section 28.4: Insert a document .............................................................................................................................. 130
Section 28.5: Read a collection ................................................................................................................................ 130
Section 28.6: Update a document ........................................................................................................................... 131
Section 28.7: Delete a document ............................................................................................................................. 132
Section 28.8: Delete multiple documents ............................................................................................................... 132
Chapter 29: Handling POST request in Node.js ......................................................................................... 134
Section 29.1: Sample node.js server that just handles POST requests ................................................................ 134
Chapter 30: Simple REST based CRUD API .................................................................................................. 135
Section 30.1: REST API for CRUD in Express 3+ ...................................................................................................... 135
Chapter 31: Template frameworks .................................................................................................................. 136
Section 31.1: Nunjucks ................................................................................................................................................ 136
Chapter 32: Node.js Architecture & Inner Workings ............................................................................... 138
Section 32.1: Node.js - under the hood .................................................................................................................... 138
Section 32.2: Node.js - in motion ............................................................................................................................. 138
Chapter 33: Debugging Node.js application ............................................................................................... 139
Section 33.1: Core node.js debugger and node inspector ..................................................................................... 139
Chapter 34: Node server without framework ........................................................................................... 142
Section 34.1: Framework-less node server ............................................................................................................. 142
Section 34.2: Overcoming CORS Issues .................................................................................................................. 143
Chapter 35: Node.JS with ES6 ............................................................................................................................ 144
Section 35.1: Node ES6 Support and creating a project with Babel ..................................................................... 144
Section 35.2: Use JS es6 on your NodeJS app ...................................................................................................... 145
Chapter 36: Interacting with Console ............................................................................................................. 148
Section 36.1: Logging ................................................................................................................................................ 148
Chapter 37: Cassandra Integration ................................................................................................................. 150
Section 37.1: Hello world ........................................................................................................................................... 150
Chapter 38: Creating APIs with Node.js ............................................................................................................ 151
Section 38.1: GET api using Express ......................................................................................................................... 151
Section 38.2: POST api using Express ..................................................................................................................... 151
Chapter 39: Graceful Shutdown ........................................................................................................................ 153
Section 39.1: Graceful Shutdown - SIGTERM ........................................................................................................... 153
Chapter 40: Using IISNode to host Node.js Web Apps in IIS .............................................................. 154
Section 40.1: Using an IIS Virtual Directory or Nested Application via <appSettings> ....................................... 154
Section 40.2: Getting Started ................................................................................................................................... 155
Section 40.3: Basic Hello World Example using Express ....................................................................................... 155
Section 40.4: Using Socket.io with IISNode ............................................................................................................. 157
Chapter 41: CLI ........................................................................................................................................................... 158
Section 41.1: Command Line Options ....................................................................................................................... 158
Chapter 42: NodeJS Frameworks .................................................................................................................... 161
Section 42.1: Web Server Frameworks .................................................................................................................... 161
Section 42.2: Command Line Interface Frameworks ............................................................................................ 161
Chapter 43: grunt ..................................................................................................................................................... 163
Section 43.1: Introduction to GruntJS ...................................................................................................................... 163
Section 43.2: Installing gruntplugins ........................................................................................................................ 164
Chapter 44: Using WebSockets with Node.js .......................................................................................... 165
Section 44.1: Installing WebSockets ........................................................................................................................ 165
Section 44.2: Adding WebSockets to your files .................................................................................................... 165
Section 44.3: Using WebSockets and WebSocket Servers .................................................................................. 165
Section 44.4: A Simple WebSocket Server Example .............................................................................................. 165
Chapter 45: metalsmith ........................................................................................................................................ 166
Section 45.1: Build a simple blog .............................................................................................................................. 166
Chapter 46: Parsing command line arguments ....................................................................................... 167
Section 46.1: Passing action (verb) and values ...................................................................................................... 167
Section 46.2: Passing boolean switches .................................................................................................................. 167
Chapter 47: Client-server communication .................................................................................................. 168
Section 47.1: With Express, jQuery and Jade ............................................................................................................. 168
Chapter 48: Node.js Design Fundamentals .................................................................................................. 170
Section 48.1: The Node.js philosophy ...................................................................................................................... 170
Chapter 49: Connect to Mongodb ................................................................................................................... 171
Section 49.1: Simple example to connect to MongoDB from Node.js .................................................................... 171
Section 49.2: Simple way to connect to MongoDB with core Node.js ................................................................... 171
Chapter 50: Performance challenges ............................................................................................................ 172
Section 50.1: Processing long running queries with Node ..................................................................................... 172
Chapter 51: Send Web Notification .................................................................................................................. 176
Section 51.1: Send Web notifications using GCM (Google Cloud Messaging) ........................................ 176
Chapter 52: Remote Debugging in Node.js ............................................................................................... 178
Section 52.1: Use the proxy for debugging via port on Linux ............................................................................... 178
Section 52.2: NodeJS run configuration ................................................................................................................. 178
Section 52.3: IntelliJ/Webstorm Configuration ...................................................................................................... 178
Chapter 53: Database (MongoDB with Mongoose) ................................................................................. 180
Section 53.1: Mongoose connection ........................................................................................................................ 180
Section 53.2: Model ................................................................................................................................................... 180
Section 53.3: Insert data ........................................................................................................................................... 181
Section 53.4: Read data ............................................................................................................................................ 181
Chapter 54: Good coding style ......................................................................................................................... 183
Section 54.1: Basic program for signup .................................................................................................................. 183
Chapter 55: RESTful API Design: Best Practices ........................................................................................ 187
Section 55.1: Error Handling: GET all resources ...................................................................................................... 187
Chapter 56: Deliver HTML or any other sort of file ................................................................................ 189
Section 56.1: Deliver HTML at specified path .......................................................................................................... 189
Chapter 57: TCP Sockets ...................................................................................................................................... 190
Section 57.1: A simple TCP server ............................................................................................................................ 190
Section 57.2: A simple TCP client ............................................................................................................................. 190
Chapter 58: Hack ...................................................................................................................................................... 192
Section 58.1: Add new extensions to require() ........................................................................................................ 192
Chapter 59: Bluebird Promises .......................................................................................................................... 193
Section 59.1: Converting nodeback library to Promises ........................................................................................ 193
Section 59.2: Functional Promises ........................................................................................................................... 193
Section 59.3: Coroutines (Generators) .................................................................................................................... 193
Section 59.4: Automatic Resource Disposal (Promise.using) ............................................................................... 193
Section 59.5: Executing in series .............................................................................................................................. 194
Chapter 60: Async/Await ...................................................................................................................................... 195
Section 60.1: Comparison between Promises and Async/Await .......................................................................... 195
Section 60.2: Async Functions with Try-Catch Error Handling ............................................................................. 195
Section 60.3: Stops execution at await ................................................................................................................... 196
Section 60.4: Progression from Callbacks .............................................................................................................. 196
Chapter 61: Koa Framework v2 ......................................................................................................................... 198
Section 61.1: Hello World example ........................................................................................................................... 198
Section 61.2: Handling errors using middleware .................................................................................................... 198
Chapter 62: Unit testing frameworks ............................................................................................................. 199
Section 62.1: Mocha Asynchronous (async/await) ................................................................................................ 199
Section 62.2: Mocha synchronous ........................................................................................................................... 199
Section 62.3: Mocha asynchronous (callback) ...................................................................................................... 199
Chapter 63: ECMAScript 2015 (ES6) with Node.js ...................................................................................... 200
Section 63.1: const/let declarations ......................................................................................................................... 200
Section 63.2: Arrow functions ................................................................................................................................... 200
Section 63.3: Arrow Function Example .................................................................................................................... 200
Section 63.4: destructuring ....................................................................................................................................... 201
Section 63.5: flow ....................................................................................................................................................... 201
Section 63.6: ES6 Class .............................................................................................................................................. 201
Chapter 64: Routing AJAX requests with Express.JS ............................................................................. 203
Section 64.1: A simple implementation of AJAX ..................................................................................................... 203
Chapter 65: Sending a file stream to client ................................................................................................ 205
Section 65.1: Using fs And pipe To Stream Static Files From The Server ............................................................ 205
Section 65.2: Streaming Using fluent-ffmpeg ........................................................................................................ 206
Chapter 66: NodeJS with Redis ......................................................................................................................... 207
Section 66.1: Getting Started .................................................................................................................................... 207
Section 66.2: Storing Key-Value Pairs ..................................................................................................................... 207
Section 66.3: Some more important operations supported by node_redis ....................................................... 209
Chapter 67: Using Browserify to resolve 'require' errors in browsers ........................................ 211
Section 67.1: Example - file.js .................................................................................................................................... 211
Chapter 68: Node.js and MongoDB .............................................................................................................. 213
Section 68.1: Connecting To a Database ................................................................................................................ 213
Section 68.2: Creating New Collection .................................................................................................................... 213
Section 68.3: Inserting Documents .......................................................................................................................... 214
Section 68.4: Reading ............................................................................................................................................... 214
Section 68.5: Updating .............................................................................................................................................. 215
Section 68.6: Deleting ............................................................................................................................................... 216
Chapter 69: Passport integration ..................................................................................................................... 217
Section 69.1: Local authentication ........................................................................................................................... 217
Section 69.2: Getting started .................................................................................................................................... 218
Section 69.3: Facebook authentication ................................................................................................................... 219
Section 69.4: Simple Username-Password Authentication ................................................................................... 220
Section 69.5: Google Passport authentication ....................................................................................................... 220
Chapter 70: Dependency Injection .................................................................................................................. 223
Section 70.1: Why Use Dependency Injection ......................................................................................................... 223
Chapter 71: NodeJS Beginner Guide ............................................................................................................... 224
Section 71.1: Hello World ! .......................................................................................................................................... 224
Chapter 72: Use Cases of Node.js .................................................................................................................... 225
Section 72.1: HTTP server ......................................................................................................................................... 225
Section 72.2: Console with command prompt ....................................................................................................... 225
Chapter 73: Sequelize.js ........................................................................................................................................ 227
Section 73.1: Defining Models ................................................................................................................................... 227
Section 73.2: Installation ........................................................................................................................................... 228
Chapter 74: PostgreSQL integration ............................................................................................................. 229
Section 74.1: Connect To PostgreSQL ..................................................................................................................... 229
Section 74.2: Query with Connection Object .......................................................................................................... 229
Chapter 75: MySQL integration ......................................................................................................................... 230
Section 75.1: Connect to MySQL .............................................................................................................................. 230
Section 75.2: Using a connection pool .................................................................................................................... 230
Section 75.3: Query a connection object with parameters ................................................................................... 231
Section 75.4: Query a connection object without parameters ............................................................................. 232
Section 75.5: Run a number of queries with a single connection from a pool ................................................... 232
Section 75.6: Export Connection Pool ...................................................................................................................... 232
Section 75.7: Return the query when an error occurs ........................................................................................... 233
Chapter 76: MySQL Connection Pool .............................................................................................................. 234
Section 76.1: Using a connection pool without database ...................................................................................... 234
Chapter 77: MSSQL Integration ...................................................................................................................... 235
Section 77.1: Connecting to SQL Server via the mssql npm module .................................................................. 235
Chapter 78: Node.js with Oracle ....................................................................................................................... 237
Section 78.1: Connect to Oracle DB ......................................................................................................................... 237
Section 78.2: Using a local module for easier querying ....................................................................................... 237
Section 78.3: Query a connection object without parameters ............................................................................. 238
Chapter 79: Synchronous vs Asynchronous programming in Node.js ............................................ 240
Section 79.1: Using async .......................................................................................................................................... 240
Chapter 80: Node.js Error Management ...................................................................................................... 241
Section 80.1: try...catch block .................................................................................................................................... 241
Section 80.2: Creating Error object ......................................................................................................................... 241
Section 80.3: Throwing Error .................................................................................................................................... 242
Chapter 81: Node.js v6 New Features and Improvement .................................................................... 243
Section 81.1: Default Function Parameters ............................................................................................................. 243
Section 81.2: Rest Parameters ................................................................................................................................. 243
Section 81.3: Arrow Functions ................................................................................................................................... 243
Section 81.4: "this" in Arrow Function ...................................................................................................................... 244
Section 81.5: Spread Operator ................................................................................................................................. 245
Chapter 82: Eventloop ........................................................................................................................................... 246
Section 82.1: How the concept of event loop evolved ........................................................................................... 246
Chapter 83: Node.js History ................................................................................................................................. 248
Section 83.1: Key events in each year ..................................................................................................................... 248
Chapter 84: passport.js ........................................................................................................................................ 251
Section 84.1: Example of LocalStrategy in passport.js .......................................................................................... 251
Chapter 85: Asynchronous programming ................................................................................................... 252
Section 85.1: Callback functions ............................................................................................................................... 252
Section 85.2: Callback hell ........................................................................................................................................ 254
Section 85.3: Native Promises .................................................................................................................................. 255
Section 85.4: Code example ..................................................................................................................................... 256
Section 85.5: Async error handling ......................................................................................................................... 257
Chapter 86: Node.js code for STDIN and STDOUT without using any library .......................... 258
Section 86.1: Program ............................................................................................................................................... 258
Chapter 87: MongoDB Integration for Node.js/Express.js .................................................................. 259
Section 87.1: Installing MongoDB ............................................................................................................................. 259
Section 87.2: Creating a Mongoose Model ............................................................................................................. 259
Section 87.3: Querying your Mongo Database ...................................................................................................... 260
Chapter 88: Lodash ................................................................................................................................................. 261
Section 88.1: Filter a collection ................................................................................................................................. 261
Chapter 89: CSV parser in Node.js ..................................................................................................................... 262
Section 89.1: Using FS to read in a CSV .................................................................................................................. 262
Chapter 90: Loopback - REST Based connector ...................................................................................... 263
Section 90.1: Adding a web based connector ........................................................................................................ 263
Chapter 91: Running node.js as a service ..................................................................................................... 265
Section 91.1: Node.js as a systemd dæmon ............................................................................................................ 265
Chapter 92: Node.js with CORS .......................................................................................................................... 266
Section 92.1: Enable CORS in express.js .................................................................................................................. 266
Chapter 93: Getting started with Node.js profiling ................................................................................... 267
Section 93.1: Profiling a simple node application ................................................................................................... 267
Chapter 94: Node.js Performance ................................................................................................................... 269
Section 94.1: Enable gzip .......................................................................................................................................... 269
Section 94.2: Event Loop .......................................................................................................................................... 269
Section 94.3: Increase maxSockets ......................................................................................................................... 270
Chapter 95: Yarn Package Manager .............................................................................................................. 272
Section 95.1: Creating a basic package .................................................................................................................. 272
Section 95.2: Yarn Installation .................................................................................................................................. 272
Section 95.3: Install package with Yarn .................................................................................................................. 274
Chapter 96: OAuth 2.0 ............................................................................................................................................ 275
Section 96.1: OAuth 2 with Redis Implementation - grant_type: password ........................................................ 275
Chapter 97: Node.js Localization .................................................................................................................... 281
Section 97.1: Using the i18n module to maintain localization in a Node.js app ................................................... 281
Chapter 98: Deploying a Node.js application without downtime ................................................... 282
Section 98.1: Deployment using PM2 without downtime ....................................................................................... 282
Chapter 99: Node.js (express.js) with angular.js Sample code ......................................................... 284
Section 99.1: Creating our project ............................................................................................................................ 284
Chapter 100: Node.js Routing ............................................................................................................................. 287
Section 100.1: Express Web Server Routing ............................................................................................................ 287
Chapter 101: Creating a Node.js Library that Supports Both Promises and Error-First Callbacks ....................... 291
Section 101.1: Example Module and Corresponding Program using Bluebird ..................................................... 291
Chapter 102: Project Structure .......................................................................................................................... 294
Section 102.1: A simple nodejs application with MVC and API .............................................................................. 294
Chapter 103: Avoid callback hell ....................................................................................................................... 296
Section 103.1: Async module ..................................................................................................................................... 296
Section 103.2: Async Module .................................................................................................................................... 296
Chapter 104: Arduino communication with Node.js ................................................................................ 298
Section 104.1: Node.js communication with Arduino via serialport ..................................................................... 298
Chapter 105: N-API ................................................................................................................................................... 300
Section 105.1: Hello to N-API ..................................................................................................................................... 300
Chapter 106: Multithreading ................................................................................................................................ 302
Section 106.1: Cluster ................................................................................................................................................. 302
Section 106.2: Child Process ..................................................................................................................................... 302
Chapter 107: Windows authentication under node.js ............................................................................ 304
Section 107.1: Using activedirectory ........................................................................................................................ 304
Chapter 108: Require() ........................................................................................................................................... 305
Section 108.1: Beginning require() use with a function and file ............................................................................ 305
Section 108.2: Beginning require() use with an NPM package ............................................................................. 306
Chapter 109: Route-Controller-Service structure for ExpressJS ..................................................... 307
Section 109.1: Model-Routes-Controllers-Services Directory Structure ............................................................... 307
Section 109.2: Model-Routes-Controllers-Services Code Structure ..................................................................... 307
Chapter 110: Push notifications .......................................................................................................................... 309
Section 110.1: Web notification .................................................................................................................................. 309
Section 110.2: Apple ................................................................................................................................................... 310
Appendix A: Installing Node.js ............................................................................................................................ 311
Section A.1: Using Node Version Manager (nvm) .................................................................................................. 311
Section A.2: Installing Node.js on Mac using package manager ......................................................................... 312
Section A.3: Installing Node.js on Windows ............................................................................................................ 312
Section A.4: Install Node.js on Ubuntu .................................................................................................................... 313
Section A.5: Installing Node.js with n ....................................................................................................................... 313
Section A.6: Install Node.js From Source with APT package manager ............................................................... 314
Section A.7: Install Node.js from source on Centos, RHEL and Fedora ............................................................... 314
Section A.8: Installing with Node Version Manager under Fish Shell with Oh My Fish! ...................................... 315
Section A.9: Installing Node.js on Raspberry PI ..................................................................................................... 315
Credits ............................................................................................................................................................................ 317
You may also like ...................................................................................................................................................... 322
About
Please feel free to share this PDF with anyone for free. The
latest version of this book can be downloaded from:
http://GoalKicker.com/NodeJSBook
This Node.js Notes for Professionals book is compiled from Stack Overflow
Documentation; the content is written by the beautiful people at Stack Overflow.
Text content is released under Creative Commons BY-SA; see the credits at the end
of this book for the people who contributed to the various chapters. Images may be copyright
of their respective owners unless otherwise specified
This is an unofficial free book created for educational purposes and is not
affiliated with official Node.js group(s) or company(s) nor Stack Overflow. All
trademarks and registered trademarks are the property of their respective
company owners
In this example we'll create an HTTP server listening on port 1337, which sends Hello, World! to the browser.
The http module is a Node.js core module (a module included in Node.js's source, that does not require installing
additional resources). The http module provides the functionality to create an HTTP server using the
http.createServer() method. To create the application, create a file containing the following JavaScript code.
const http = require('http');

http.createServer(function (request, response) {
  // 1. Tell the browser everything is OK (Status code 200), and the data is in plain text
  response.writeHead(200, {
    'Content-Type': 'text/plain'
  });

  // 2. Write the announced text to the body of the page
  response.write('Hello, World!\n');

  // 3. Tell the server that all of the response headers and body have been sent
  response.end();
}).listen(1337);
Save the file with any file name. In this case, if we name it hello.js we can run the application by going to the
directory the file is in and using the following command:
node hello.js
The created server can then be accessed with the URL http://localhost:1337 or http://127.0.0.1:1337 in the browser.
A simple web page will appear with a “Hello, World!” text at the top, as shown in the screenshot below.
1. Create a new file and paste the code below. The filename is irrelevant.
2. Make this file executable with chmod 700 FILE_NAME
3. Run the app with ./APP_NAME
#!/usr/bin/env node
'use strict';
/*
 The command line arguments are stored in the `process.argv` array,
 which has the following structure:
 [0] The path of the executable that started the Node.js process
 [1] The path to this application
 [2-n] The command line arguments
*/

// Print "Hello, World!" to stdout
console.log('Hello, World!');
First, create a new folder, e.g. myApp. Go into myApp and make a new JavaScript file containing the following code
(let's name it hello.js for example). Then install the express module using npm install --save express from the
command line. Refer to this documentation for more information on how to install packages.
const express = require('express');
const app = express();

// Routes HTTP GET requests to the specified path "/" with the specified callback function
app.get('/', function(request, response) {
  response.send('Hello, World!');
});

app.listen(3000);
node hello.js
Open your browser and navigate to http://localhost:3000 or http://127.0.0.1:3000 to see the response.
For more information about the Express framework, you can check the Web Apps With Express section
Mac: Navigate to the download page and download/run the installer. Alternatively, you can install Node via
Homebrew using brew install node. Homebrew is a command-line package manager for macOS, and more
information about it can be found on the Homebrew website.
Linux: Follow the instructions for your distro on the command line installation page.
To run a Node.js program, simply run node app.js or nodejs app.js, where app.js is the filename of your node
app source code. You do not need to include the .js suffix for Node to find the script you'd like to run.
Alternatively under UNIX-based operating systems, a Node program may be executed as a terminal script. To do so,
it needs to begin with a shebang pointing to the Node interpreter, such as #!/usr/bin/env node. The file also has
to be set as executable, which can be done using chmod. Now the script can be directly run from the command line.
Debugging natively
To make the debugger break exactly at the code line you want, place this statement there:

debugger;

Then run your script with node --inspect, open chrome://inspect in a recent version of Google Chrome, and select
your Node script to get the debugging experience of Chrome's DevTools.
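A minimal script to try this with (the filename debug-demo.js is just an example):

```javascript
// Save as debug-demo.js and run: node --inspect-brk debug-demo.js
let total = 0;
for (let i = 1; i <= 3; i++) {
  debugger; // execution pauses here while a debugger is attached; a no-op otherwise
  total += i;
}
console.log(total); // 6
```

Without an attached debugger the `debugger;` statement does nothing, so the script is safe to run normally as well.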
The most basic example of this would be to check if (request.url === 'some/path/here'), and then call a
function that responds with a new file.
response.writeHead(404);
response.end(http.STATUS_CODES[404]);
}).listen(1337);
If you continue to define your "routes" like this, though, you'll end up with one massive callback function, and we
don't want a giant mess like that, so let's see if we can clean this up.
Now that we've stored 2 routes in an object, we can now check for them in our main callback:
if (request.url in routes) {
return routes[request.url](request, response);
}
response.writeHead(404);
response.end(http.STATUS_CODES[404]);
}).listen(1337);
Now every time you try to navigate your website, it will check for the existence of that path in your routes, and it will
call the respective function. If no route is found, the server will respond with a 404 (Not Found).
And there you have it--routing with the HTTP Server API is very simple.
$ node
>
$ node
> "Hello World!"
'Hello World!'
For example,
http.createServer(function(request, response) {
// your server code
}).listen(process.env.PORT);
http.createServer(function(request, response) {
// your server code
}).listen(process.env.PORT || 3000);
[ 'assert',
'buffer',
'c/c++_addons',
'child_process',
'cluster',
'console',
'crypto',
'deprecated_apis',
'dns',
'domain',
'events',
'fs',
'http',
'https',
'module',
'net',
'os',
'path',
'punycode',
'querystring',
'readline',
'repl',
'stream',
'string_decoder',
'timers',
'tls_(ssl)',
'tracing',
'tty',
'dgram',
'url',
'util',
'v8',
'vm',
'zlib' ]
This list was obtained from the Node documentation API https://nodejs.org/api/all.html (JSON file:
https://nodejs.org/api/all.json).
assert
The assert module provides a simple set of assertion tests that can be used to test invariants.
buffer
Prior to the introduction of TypedArray in ECMAScript 2015 (ES6), the JavaScript language had no mechanism for
reading or manipulating streams of binary data. The Buffer class was introduced as part of the Node.js API to make
it possible to interact with octet streams in the context of things like TCP streams and file system operations.
Now that TypedArray has been added in ES6, the Buffer class implements the Uint8Array API in a manner that is
more optimized and suitable for Node.js' use cases.
c/c++_addons
Node.js Addons are dynamically-linked shared objects, written in C or C++, that can be loaded into Node.js using
the require() function, and used just as if they were an ordinary Node.js module. They are used primarily to
provide an interface between JavaScript running in Node.js and C/C++ libraries.
child_process
The child_process module provides the ability to spawn child processes in a manner that is similar, but not
identical, to popen(3).
cluster
A single instance of Node.js runs in a single thread. To take advantage of multi-core systems the user will
sometimes want to launch a cluster of Node.js processes to handle the load. The cluster module allows you to
easily create child processes that all share server ports.
console
The console module provides a simple debugging console that is similar to the JavaScript console mechanism
provided by web browsers.
crypto
The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC,
cipher, decipher, sign and verify functions.
deprecated_apis
Node.js may deprecate APIs when either: (a) use of the API is considered to be unsafe, (b) an improved alternative
API has been made available, or (c) breaking changes to the API are expected in a future major release.
dns
The dns module contains functions belonging to two different categories: 1) functions that use the underlying
operating system facilities to perform name resolution, and that do not necessarily perform any network
communication; 2) functions that connect to an actual DNS server to perform name resolution, and that always use
the network to perform DNS queries.
domain
This module is pending deprecation. Once a replacement API has been finalized, this module will be fully
deprecated. Most end users should not have cause to use this module. Users who absolutely must have the
functionality that domains provide may rely on it for the time being but should expect to have to migrate to a
different solution in the future.
Events
Much of the Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain
kinds of objects (called "emitters") periodically emit named events that cause Function objects ("listeners") to be
called.
fs
File I/O is provided by simple wrappers around standard POSIX functions. To use this module do require('fs'). All
the methods have asynchronous and synchronous forms.
http
The HTTP interfaces in Node.js are designed to support many features of the protocol which have been traditionally
difficult to use. In particular, large, possibly chunk-encoded, messages. The interface is careful to never buffer entire
requests or responses--the user is able to stream data.
https
HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as a separate module.
module
Node.js has a simple module loading system. In Node.js, files and modules are in one-to-one correspondence (each
file is treated as a separate module).
net
The net module provides you with an asynchronous network wrapper. It contains functions for creating both
servers and clients (called streams). You can include this module with require('net');.
os
The os module provides a number of operating-system-related utility methods.
path
The path module provides utilities for working with file and directory paths.
punycode
The punycode module is a bundled (and since deprecated) implementation of the Punycode encoding, which
represents Unicode hostnames using only ASCII characters.
querystring
The querystring module provides utilities for parsing and formatting URL query strings.
readline
The readline module provides an interface for reading data from a Readable stream (such as process.stdin) one
line at a time.
repl
The repl module provides a Read-Eval-Print-Loop (REPL) implementation that is available both as a standalone
program or includible in other applications.
stream
A stream is an abstract interface for working with streaming data in Node.js. The stream module provides a base
API that makes it easy to build objects that implement the stream interface.
There are many stream objects provided by Node.js. For instance, a request to an HTTP server and process.stdout
are both stream instances.
string_decoder
The string_decoder module provides an API for decoding Buffer objects into strings in a manner that preserves
encoded multi-byte UTF-8 and UTF-16 characters.
timers
The timer module exposes a global API for scheduling functions to be called at some future period of time. Because
the timer functions are globals, there is no need to call require('timers') to use the API.
The timer functions within Node.js implement a similar API as the timers API provided by Web Browsers but use a
different internal implementation that is built around the Node.js Event Loop.
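For example:

```javascript
// setTimeout schedules a function to run once after a delay (in ms);
// no require is needed, the timer functions are globals
const timer = setTimeout(function () {
  console.log('This would fire after ~1 second');
}, 1000);

// clearTimeout cancels a pending timer before it fires
clearTimeout(timer);

// setImmediate queues a callback for the next turn of the event loop
setImmediate(function () {
  console.log('runs on the next turn of the event loop');
});
```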
tls_(ssl)
The tls module provides an implementation of the Transport Layer Security (TLS) and Secure Socket Layer (SSL)
protocols that is built on top of OpenSSL.
tracing
Trace Event provides a mechanism to centralize tracing information generated by V8, Node core, and userspace
code.
Tracing can be enabled by passing the --trace-events-enabled flag when starting a Node.js application.
tty
The tty module provides the tty.ReadStream and tty.WriteStream classes. In most cases, it will not be necessary
or possible to use this module directly.
url
The url module provides utilities for URL resolution and parsing.
util
The util module is primarily designed to support the needs of Node.js' own internal APIs. However, many of the
utilities are useful for application and module developers as well.
v8
The v8 module exposes APIs that are specific to the version of V8 built into the Node.js binary.
Note: The APIs and implementation are subject to change at any time.
vm
The vm module provides APIs for compiling and running code within V8 Virtual Machine contexts. JavaScript code
can be compiled and run immediately or compiled, saved, and run later.
Note: The vm module is not a security mechanism. Do not use it to run untrusted code.
zlib
The zlib module provides compression functionality implemented using Gzip and Deflate/Inflate.
The first step in this security process is the creation of a private Key. And what is this private key? Basically, it's a set
of random noise that's used to encrypt information. In theory, you could create one key, and use it to encrypt
whatever you want. But it is best practice to have different keys for specific things. Because if someone steals your
private key, it's similar to having someone steal your house keys. Imagine if you used the same key to lock your car,
garage, office, etc.
Once we have our private key, we can create a CSR (certificate signing request), which is our request to have the
private key signed by a fancy authority. That is why you have to input information related to your company. This
information will be seen by the signing authority, and used to verify you. In our case, it doesn’t matter what you
type, since in the next step we're going to sign our certificate ourselves.
Now that we have our paper work filled out, it's time to pretend that we're a cool signing authority.
Important!
Since we created the public cert ourselves, in all honesty, our certificate is worthless, because we are nobodies. A
NodeJS client won't trust such a certificate by default, and that is why we need to tell it to actually trust our cert
with the following option: rejectUnauthorized: false. Very important: never set rejectUnauthorized to false in a
production environment.
var options = {
key: fs.readFileSync('private-key.pem'),
cert: fs.readFileSync('public-cert.pem')
};
});
});
});
});
console.error(error);
});
// Pass the certs to the server and let it know to process even unauthorized certs.
var options = {
key: fs.readFileSync('private-key.pem'),
cert: fs.readFileSync('public-cert.pem'),
rejectUnauthorized: false
};
});
client.on("data", function(data) {
});
client.on('close', function() {
console.log("Connection closed");
});
console.error(error);
});
1. create the folder where you want to store your key & certificate :
mkdir conf
2. go to that directory :
cd conf
wget https://raw.githubusercontent.com/anders94/https-authorized-clients/master/keys/ca.cnf
openssl req -new -x509 -days 9999 -config ca.cnf -keyout ca-key.pem -out ca-cert.pem
5. now that we have our certificate authority in ca-key.pem and ca-cert.pem, let's generate a private key for
the server :
wget https://raw.githubusercontent.com/anders94/https-authorized-clients/master/keys/server.cnf
openssl x509 -req -extfile server.cnf -days 999 -passin "pass:password" -in csr.pem -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem
2. update CA store :
sudo update-ca-certificates
First, you want to create a server.js file that contains your actual server code.
The minimal setup for an HTTPS server in Node.js would be something like this :
var httpsOptions = {
key: fs.readFileSync('path/to/server-key.pem'),
cert: fs.readFileSync('path/to/server-crt.pem')
};
https.createServer(httpsOptions, app).listen(4433);
If you also want to support http requests, you need to make just this small modification :
var httpsOptions = {
key: fs.readFileSync('path/to/server-key.pem'),
cert: fs.readFileSync('path/to/server-crt.pem')
};
http.createServer(app).listen(8888);
https.createServer(httpsOptions, app).listen(4433);
cd /path/to
2. run server.js :
Node Package Manager (npm) provides two main functionalities: an online repository for Node.js packages/modules,
searchable at www.npmjs.com, and a command-line utility to install Node.js packages and to manage versions and
dependencies of Node.js packages.
Package is a term used by npm to denote tools that developers can use for their projects. This includes everything
from libraries and frameworks such as jQuery and AngularJS to task runners such as Gulp.js. The packages will
come in a folder typically called node_modules, which will also contain a package.json file. This file contains
information regarding all the packages including any dependencies, which are additional modules needed to use a
particular package.
Npm uses the command line to both install and manage packages, so users attempting to use npm should be
familiar with basic commands on their operating system i.e.: traversing directories as well as being able to see the
contents of directories.
Installing NPM
Note that in order to install packages, you must have NPM installed.
The recommended way to install npm is to use one of the installers from the Node.js download page. You can
check whether npm is already installed by running the npm -v or npm version command (and node -v for Node.js
itself).
After installing NPM via the Node.js installer, be sure to check for updates. This is because NPM gets updated more
frequently than the Node.js installer. To check for updates run the following command:
Note: This will install the package in the directory that the command line is currently in, thus it is
important to check whether the appropriate directory has been chosen
If you already have a package.json file in your current working directory and dependencies are defined in it, then
npm install will automatically resolve and install all dependencies listed in the file. You can also use the shorthand
version of the npm install command which is: npm i
If you want to install a version which matches a specific version range use:
# e.g. to install a version which matches "version >= 4.10.1" and "version < 4.11.1"
# of the package lodash
npm install lodash@">=4.10.1 <4.11.1"
The above commands will search for packages in the central npm repository at npmjs.com. If you are not looking to
install from the npm registry, other options are supported, such as:
# Scoping is useful for separating private packages hosted on private registry from
# public ones by setting registry for specific scope
Usually, modules will be installed locally in a folder named node_modules, which can be found in your current
working directory. This is the directory require() will use to load modules in order to make them available to you.
If you already created a package.json file, you can use the --save (shorthand -S) option or one of its variants to
automatically add the installed package to your package.json as a dependency. If someone else installs your
package, npm will automatically read dependencies from the package.json file and install the listed versions. Note
that you can still add and manage your dependencies by editing the file later, so it's usually a good idea to keep
track of dependencies, for example using:
To install packages and save them as development dependencies (needed during development, but not for running
the application in production), use the following command:
Installing dependencies
Some modules do not only provide a library for you to use, but they also provide one or more binaries which are
intended to be used via the command line. Although you can still install those packages locally, it is often preferred
to install them globally so the command-line tools can be enabled. In that case, npm will automatically link the
binaries to appropriate paths (e.g. /usr/local/bin/<name>) so they can be used from the command line. To install
a package globally, use:
If you want to see a list of all the installed packages and their associated versions in the current workspace, use:
npm list
npm list <name>
Adding an optional name argument can check the version of a specific package.
Note: If you run into permission issues while trying to install an npm module globally, resist the temptation to issue
sudo npm install -g ... to overcome the issue. Allowing third-party scripts to run on your system with
elevated privileges is dangerous. The permission issue might mean that you have a problem with the way npm itself
was installed. If you're interested in installing Node in sandboxed user environments, you might want to try using
nvm.
npm install --save-dev <name>   # Install a development dependency that is not included in production
# or
npm install -D <name>
You will see that the package is then added to the devDependencies of your package.json.
npm install
# or
npm i
npm will automatically read the dependencies from package.json and install them.
If your internet access is through a proxy server, you might need to modify npm install commands that access
remote repositories. npm uses a configuration file which can be updated via command line:
You can locate your proxy settings from your browser's settings panel. Once you have obtained the proxy settings
(server URL, port, username and password), you need to configure your npm configuration as follows.
The username, password and port fields are optional. Once you have set these, your npm install, npm i -g, etc. will
work properly.
The uninstall command for npm has five aliases that can also be used:
If you would like to remove the package from the package.json file as part of the uninstallation, use the --save flag
(shorthand: -S):
For packages that are installed globally use the --global flag (shorthand: -g):
npm init
That will try to read the current working directory for Git repository information (if it exists) and environment
variables to try and autocomplete some of the placeholder values for you. Otherwise, it will provide an input dialog
for the basic options.
If you're creating a package.json for a project that you are not going to be publishing as an npm package (i.e. solely
for the purpose of rounding up your dependencies), you can convey this intent in your package.json file:
The package and associated metadata (such as the package version) will appear in your dependencies. If you save it
as a development dependency (using --save-dev), the package will instead appear in your devDependencies.
With this bare-bones package.json, you will encounter warning messages when installing or upgrading packages,
telling you that you are missing a description and the repository field. While it is safe to ignore these messages, you
can get rid of them by opening the package.json in any text editor and adding the following lines to the JSON object:
[...]
"description": "No description",
"private": true,
[...]
{
"name": "your-package",
"version": "1.0.0",
"description": "",
"main": "index.js",
"author": "",
"license": "ISC",
"dependencies": {},
"devDependencies": {},
"scripts": {
"echo": "echo hello!"
}
}
To run the echo script, run npm run echo from the command line. Arbitrary scripts, such as echo above, have to
be run with npm run <script name>. npm also has a number of official scripts that it runs at certain stages of the
package's life (like preinstall). See the npm documentation for the entire overview of how npm handles script fields.
npm scripts are used most often for things like starting a server, building the project, and running tests. Here's a
more realistic example:
"scripts": {
"test": "mocha tests",
"start": "pm2 start index.js"
}
In the scripts entries, command-line programs like mocha will work when installed either globally or locally. If the
command-line entry does not exist in the system PATH, npm will also check your locally installed packages.
If your scripts become very long, they can be split into parts, like this:
"scripts": {
"very-complex-command": "npm run chain-1 && npm run chain-2",
"chain-1": "webpack",
"chain-2": "node app.js"
}
For example, if your package is at version 1.2.3, you can change the version as follows:
When you set a package version using one of the npm commands above, npm will modify the version field of the
package.json file, commit it, and also create a new Git tag with the version prefixed with a "v", as if you've issued the
command:
Unlike other package managers like Bower, the npm registry doesn't rely on Git tags being created for every
version. But, if you like using tags, you should remember to push the newly created tag after bumping the package
version:
npm login
npm adduser
npm config ls
npm publish
If you need to publish a new version, ensure that you update your package version, as stated in Basic semantic
versioning. Otherwise, npm will not let you publish the package.
{
  "name": "package-name",
  "version": "1.0.4"
}
More on it
npm list
ls, la and ll are aliases of the list command. The la and ll commands show extended information like description
and repository.
Options
npm outdated
This will update the package to the latest version according to the restrictions in package.json
If the name of your own package starts with @myscope and the scope "myscope" is associated with a different
repository, npm publish will upload your package to that repository instead.
@myscope:registry=http://registry.corporation.com
//registry.corporation.com/:_authToken=xxxxxxxx-xxxx-xxxx-xxxxxxxxxxxxxxx
Help text
NAME
npm-link - Symlink a package folder
SYNOPSIS
npm link (in package dir)
npm link [<@scope>/]<pkg>[@<version>]
alias: npm ln
When creating the dependency link, note that the package name is what is going to be referenced in the parent
project.
Linking projects can sometimes cause issues if the dependency or global tool is already installed. Running
npm uninstall (-g) <pkg> and then npm link normally resolves any issues that may arise.
To lock down each dependencies' version (and the versions of their dependencies, etc) to the specific version
installed locally in the node_modules folder, use
npm shrinkwrap
This will then create an npm-shrinkwrap.json alongside your package.json which lists the specific versions of
dependencies.
On many OSes, npm install -g will attempt to write to a directory that your user may not be able to write to such
as /usr/bin. You should not use sudo npm install in this case since there is a possible security risk of running
arbitrary scripts with sudo and the root user may create directories in your home that you cannot write to which
makes future installations more difficult.
You can tell npm where to install global modules to via your configuration file, ~/.npmrc. This is called the prefix
which you can view with npm prefix.
prefix=~/.npm-global-modules
This will use the prefix whenever you run npm install -g. You can also use npm install --prefix ~/.npm-
global-modules to set the prefix when you install. If the prefix is the same as your configuration, you don't need to
use -g.
export PATH=$PATH:~/.npm-global-modules/bin
Now when you run npm install -g gulp-cli you will be able to use gulp.
Note: When you npm install (without -g), the prefix will be the directory with package.json, or the current
directory if none is found in the hierarchy. This also creates a directory node_modules/.bin that contains the
binaries of the locally installed packages.
web applications.
The official website of Express is expressjs.com. The source can be found on GitHub.
Create a file and name it app.js and add the following code which creates a new Express server and adds one
endpoint to it (/ping) with the app.get method:

const express = require('express');
const app = express();

// Respond with "pong" when GET /ping is requested
app.get('/ping', function (req, res) {
  res.send('pong');
});

app.listen(8080, 'localhost');
Your application will accept connections on localhost port 8080. If the hostname argument to app.listen is
omitted, then the server will accept connections on the machine's IP address as well as localhost. If the port value is
0, the operating system will assign an available port.
Once your script is running, you can test it in a shell to confirm that you get the expected response, "pong", from
the server:
You can also open a web browser, navigate to the url http://localhost:8080/ping to view the output
That structure works for all HTTP methods, and expects a path as the first argument, and a handler for that path,
which receives the request and response objects. So, for the basic HTTP methods, these are the routes
// GET www.domain.com/myPath
app.get('/myPath', function (req, res, next) {})
// POST www.domain.com/myPath
app.post('/myPath', function (req, res, next) {})
// PUT www.domain.com/myPath
app.put('/myPath', function (req, res, next) {})
// DELETE www.domain.com/myPath
app.delete('/myPath', function (req, res, next) {})
You can check the complete list of supported verbs here. If you want to define the same behavior for a route and all
HTTP methods, you can use:

app.all('/myPath', function (req, res, next) {})

or

app.use('/myPath', function (req, res, next) {})

or

app.route('/myPath')
  .get(function (req, res, next) {})
  .post(function (req, res, next) {})
  .put(function (req, res, next) {})
You can also add functions to any HTTP method. They will run before the final callback and take the parameters
(req, res, next) as arguments.
// GET www.domain.com/myPath
app.get('/myPath', myFunction, function (req, res, next) {})
Your final callbacks can be stored in an external file to avoid putting too much code in one file:
// other.js
exports.doSomething = function(req, res, next) {/* do some stuff */};
Module:
// greet.js
const express = require('express');

// A middleware factory: returns a configured router (route path is illustrative)
module.exports = function (options) {
  const router = express.Router();
  router.get('/greet', (req, res) => res.send(options.greeting));
  return router;
};
Application:
// app.js
const express = require('express');
const greetMiddleware = require('./greet.js');
express()
.use('/api/v1/', greetMiddleware({ greeting:'Hello world' }))
.listen(8080);
This will make your application modular, customisable and your code reusable.
Module:
// greet.js
const express = require('express');
module.exports = function ({ service }) {
  const router = express.Router();
  router.get('/greet/:name', (req, res) => res.send(service.createGreeting(req.params.name)));
  return router;
};
Application:
// app.js
const express = require('express');
const greetMiddleware = require('./greet.js');
class GreetingService {
constructor(greeting = 'Hello') {
this.greeting = greeting;
}
createGreeting(name) {
return `${this.greeting}, ${name}!`;
}
}
express()
.use('/api/v1/service1', greetMiddleware({
service: new GreetingService('Hello'),
}))
.use('/api/v1/service2', greetMiddleware({
service: new GreetingService('Hi'),
}))
.listen(8080);
The following code will setup Jade as template engine. (Note: Jade has been renamed to pug as of December 2015.)
With EJS (like other Express templates), you can run server code and access your server variables from your HTML.
In EJS it's done using "<%" as the start tag and "%>" as the end tag; variables passed as the render params can be
accessed using <%= var_name %>.
For instance, if you have a supplies array in your server code,
you can loop over it using:
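A sketch of such a loop (assuming the array was passed to res.render as supplies):

```ejs
<ul>
  <% for (var i = 0; i < supplies.length; i++) { %>
    <li><%= supplies[i] %></li>
  <% } %>
</ul>
```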
As you can see in the example, every time you switch between server-side code and HTML you need to close the
current EJS tag and open a new one later. Here we wanted to create an li inside the for loop, so we needed to
close our EJS tag at the end of the for statement and open a new tag just for the closing curly bracket.
Another example: if we want the default value of an input to be a variable from the server side, we use <%=.
For example:
Message:<br>
<input type="text" value="<%= message %>" name="message" required>
Here the message variable passed from your server side will be the default value of your input. Note that if you
don't pass the message variable from your server side, EJS will throw an exception. You can pass parameters
using res.render('index', {message: message}); (for an EJS file called index.ejs).
Inside EJS tags you can also use if, while, or any other JavaScript statement you want.
// or
app.listen(port, function() {
console.log('Node.js listening on port ' + port)
})
{
string_value: "StackOverflow",
number_value: 8476
}
For example, you may have index.html and script.js which are static files kept in the file system.
It is common to use a folder named 'public' for static files. In this case the folder structure may look like:
project root
├── server.js
├── package.json
└── public
    ├── index.html
    └── script.js
app.use(express.static('public'));
Note: once the folder is configured, index.html, script.js and all the files in the "public" folder will be available at
the root path (you must not specify /public/ in the URL), because Express looks up the files relative to the
configured static folder. You can specify a virtual path prefix as shown below:
app.use('/static', express.static('public'));
Multiple folders
app.use(express.static('public'));
app.use(express.static('images'));
When serving the resources, Express will examine the folders in definition order. In the case of files with the same
name, the one in the first matching folder will be served.
Middleware functions can execute any code, make changes to the res and req objects, end the request-response
cycle, and call the next middleware.
A very common example of middleware is the cors module. To add CORS support, simply install it, require it and
add this line:
app.use(cors());
By default, Express will look for an 'error' view in the /views directory to render. Simply create the 'error' view and
place it in the views directory to handle errors. Errors are written with the error message, status and stack trace, for
example:
views/error.pug
html
body
h1= message
h2= error.status
p= error.stack
Define your error-handling middleware functions at the very end of the middleware function stack. These have four
arguments instead of three: (err, req, res, next). For example:
app.js
You can define several error-handling middleware functions, just as you would with regular middleware functions.
req.get('Content-Type')
// "text/plain"
To simplify getting other info you can use middleware. For example, to get the body info of the request, you can
use the body-parser middleware, which will transform the raw request body into a usable format.
PUT /settings/32135
{
"name": "Peter"
}
req.body.name
// "Peter"
In a similar way, you can access cookies from the request, you also need a middleware like cookie-parser
req.cookies.name
//GET /names/john
app.get('/names/:name', function(req, res, next){
if (req.params.name == 'john'){
return res.send('Valid Name');
} else{
next(new Error('Not valid name')); //pass to error handler
}
});
//error handler
app.use(function(err, req, res, next){
console.log(err.stack); // e.g., Not valid name
return res.status(500).send('Internal Server Error');
});
app.listen(3000);
Section 3.11: Hook: How to execute code before any req and after any res
app.use() and middleware can be used for "before" and a combination of the close and finish events can be used
for "after".
An example of this is the logger middleware, which will append to the log after the response by default.
Just make sure this "middleware" is used before app.router as order does matter.
app.listen(3000);
Example
The following code adds user to the request object and passes control to the next matching route.
app.listen(3000);
The downside of this approach is that you can't use the Router from the express module as shown in Advanced
router usage. The workaround is to pass your app as a parameter to your router factory:
require('./middlewares/routing')(app);
From here on you can work out how to define functions that merge with specified custom namespaces and point at
the appropriate controllers.
'/'
'/wiki'
Any other path will return "404", i.e. page not found.
'use strict';
const express = require('express');
const app = express();

app.get('/',(req,res)=>res.send('HelloWorld!'));
app.get('/wiki',(req,res)=>res.send('This is wiki page.'));
app.use((req,res)=>res.send('404-PageNotFound'));

app.listen(8080);
Note: We have put 404 route as the last route as Express stacks routes in order and processes them for each
request sequentially.
Requests to /api/foo or to /api/bar will run the initial handler to look up the member and then pass control to the
actual handler for each route.
Error handler
Error handlers are middleware with the signature function(err, req, res, next). They could be set up per route
(e.g. app.get('/foo', function(err, req, res, next)) but typically, a single error handler that renders an error
page is sufficient.
Middleware
Each of the functions above is actually a middleware function that is run whenever a request matches the route
defined, but any number of middleware functions can be defined on a single route. This allows middleware to be
defined in separate files and common logic to be reused across multiple routes.
In this example, each middleware function would be either in its own file or in a variable elsewhere in the file so
that it could be reused in other routes.
if (req.xhr) // if req via ajax then send json else render error-page
res.json(err);
else
res.render('error.html', {error: err.message});
});
// Error handler 2
app.use(function(err, req, res, next) {
// do something here, e.g. check that the error is a MyError
if (err instanceof MyError) {
console.log(err.message, err.arg1, err.arg2);
}
...
res.end();
});
But before you can handle POST requests, you will need to use the body-parser middleware. It simply parses the
body of POST, PUT, DELETE and other requests.
Body-Parser middleware parses the body of the request and turns it into an object available in req.body
});
app.listen(8080, 'localhost');
const fs = require('fs');
With Encoding
In this example, read hello.txt from the directory /tmp. This operation will be completed in the background and
the callback occurs on completion or failure:
Without Encoding
Read the binary file binary.txt from the current directory, asynchronously in the background. Note that we do not
set the 'encoding' option - this prevents Node.js from decoding the contents into a string:
Relative paths
Keep in mind that, in the general case, your script could be run with an arbitrary current working directory. To
address a file relative to the current script, use __dirname or __filename:
A synchronous variant is available as readdirSync which blocks the main thread and therefore prevents execution
of asynchronous code at the same time. Most developers avoid synchronous IO functions in order to improve
performance.
let files;
try {
files = fs.readdirSync('/var/tmp');
} catch(err) {
// An error occurred
console.error(err);
}
Using a generator
const fs = require('fs');
return iter.next(data);
});
iter.next();
}
/*
Create readable stream to file in current directory named 'node.txt'
Use utf8 encoding
Read the data in 16-kilobyte chunks
*/
var readable = fs.createReadStream(__dirname + '/node.txt', { encoding: 'utf8', highWaterMark: 16 *
1024 });
const fs = require('fs');
Reading a String
fs.readFileSync behaves similarly to fs.readFile, but does not take a callback as it completes synchronously and
therefore blocks the main thread. Most node.js developers prefer the asynchronous variants which will cause
virtually no delay in the program execution.
If an encoding option is specified, a string will be returned, otherwise a Buffer will be returned.
fs.constants.F_OK - The path is visible to the calling process; checks for existence only (if no mode is provided, this is the default)
fs.constants.R_OK - Has read permissions
fs.constants.W_OK - Has write permissions
fs.constants.X_OK - Has execute permissions (behaves like fs.constants.F_OK on Windows)
Asynchronously
var fs = require('fs');
var path = '/path/to/check';
fs.access(path, function(err) {
  if (err) {
    // the path does not exist or the process lacks permission
    console.error(err);
  } else {
    console.log('can access');
  }
});
Synchronously
fs.access also has a synchronous version fs.accessSync. When using fs.accessSync you must enclose it within a
try/catch block.
fs.stat('path/to/file', function(err) {
if (!err) {
console.log('file or directory exists');
}
else if (err.code === 'ENOENT') {
console.log('file or directory does not exist');
}
});
Synchronously
here, we must wrap the function call in a try/catch block to handle the error.
var fs = require('fs');
try {
fs.statSync('path/to/file');
console.log('file or directory exists');
}
catch (err) {
if (err.code === 'ENOENT') {
console.log('file or directory does not exist');
}
}
Usage:
node app
Usage:
node app
can lead to a race condition if the folder is created between the time of the check and the time of the creation. The
method below wraps fs.mkdir() and fs.mkdirSync() in error-catching wrappers that let the exception pass if its
code is EEXIST (the folder already exists). If the error is something else, like EPERM (permission denied), they throw
or pass an error like the native functions do.
var fs = require('fs');
if (err)
return console.error(err.code);
});
mkdirSync('./existing-dir');
// Do something with `./existing-dir` now
/*
Create readable stream to file in current directory (__dirname) named 'node.txt'
Use utf8 encoding
Read the data in 16-kilobyte chunks
*/
var readable = fs.createReadStream(__dirname + '/node.txt', { encoding: 'utf8', highWaterMark: 16 *
1024 });
fs.writeFileSync behaves similarly to fs.writeFile, but does not take a callback as it completes synchronously
and therefore blocks the main thread. Most node.js developers prefer the asynchronous variants which will cause
virtually no delay in the program execution.
Note: Blocking the main thread is bad practice in Node.js. Synchronous functions should only be used when debugging or
when no other option is available.
// Write a string to another file and set the file mode to 0755
try {
fs.writeFileSync('sync.txt', 'anni', { mode: 0o755 });
} catch(err) {
// An error occurred
console.error(err);
}
var fs = require('fs');
var fs = require('fs');
fs.unlink('/path/to/file.txt', function(err) {
if (err) throw err;
var fs = require('fs');
fs.unlinkSync('/path/to/file.txt');
console.log('file deleted');
* avoid synchronous methods because they block the entire process until the execution finishes.
const fs = require('fs');
// Of course, you can do anything else you need to here, like emit an event!
});
hello-world.js
module.exports = function(subject) {
console.log('Hello ' + subject);
};
If we don't want the entire export to be a single object, we can export functions and variables as properties of the
exports object. The three following examples all demonstrate this in slightly different ways:
hello-venus.js : the function definition is done separately then added as a property of module.exports
hello-jupiter.js : the functions definitions are directly put as the value of properties of module.exports
hello-mars.js : the function definition is directly declared as a property of exports which is a short version of
module.exports
hello-venus.js
function hello(subject) {
console.log('Venus says Hello ' + subject);
}
module.exports = {
hello: hello
};
hello-jupiter.js
module.exports = {
hello: function(subject) {
console.log('Jupiter says hello ' + subject);
},
bye: function(subject) {
console.log('Jupiter says goodbye ' + subject);
}
};
hello-mars.js
exports.hello = function(subject) {
console.log('Mars says Hello ' + subject);
};
index.js
main.js
// hello/main.js
// We can include the other files we've defined by using the `require()` method
var hw = require('./hello-world.js'),
hm = require('./hello-mars.js'),
hv = require('./hello-venus.js'),
hj = require('./hello-jupiter.js'),
hu = require('./index.js');
// In this case, we assigned our function to the `hello` property of exports, so we must
// use that here too
hm.hello('Solar System!'); // outputs "Mars says Hello Solar System!"
Aside from modules that are shipped with the runtime, you can also require modules that you have installed from
npm, such as express. If you had already installed express on your system via npm install express, you could
simply write:
You can also include modules that you have written yourself as part of your application. In this case, to include a file
named lib.js in the same directory as current file:
Note that you can omit the extension, and .js will be assumed. Once you load a module, the variable is populated
with an object that contains the methods and properties published from the required file. A full example:
function_one.js
module.exports = function() {
return 1;
}
function_two.js
module.exports = function() {
return 2;
}
index.js
exports.f_one = require('./function_one.js');
exports.f_two = require('./function_two.js');
Please note that if you omit ./ (or any other indication of a path to a folder) from the require function
argument, Node will try to load the module from the node_modules folder.
Alternatively you can create in the same folder a package.json file with these contents:
{
"name": "my_module",
"main": "./your_main_entry_point.js"
}
This way you are not required to name the main module file "index".
myModule.js
console.log(123) ;
exports.var1 = 4 ;
For example, to require a module called foo from a file index.js, you can use the following directory structure:
index.js
\- node_modules
\- foo
|- foo.js
\- package.json
Modules should be placed inside a directory, along with a package.json file. The main field of the package.json file
should point to the entry point for your module: this is the file that is imported when users do
require('your-module'). main defaults to index.js if not provided. Alternatively, you can refer to files relative to
your module simply by appending the relative path to the require call: require('your-module/path/to/file').
Modules can also be required from node_modules directories up the file system hierarchy. If we have the following
directory structure:
my-project
\- node_modules
|- foo // the foo module
\- ...
\- baz // the baz module
\- node_modules
\- bar // the bar module
we will be able to require the module foo from any file within bar using require('foo').
Note that Node will only match the module that is closest to the requiring file, starting from the file's own
node_modules directory and moving up the file system hierarchy until it reaches the root.
You can either install new modules from the npm registry or other npm registries, or make your own.
To use any of these, just require the module as you normally would:
req.user = user
next()
})
}
To get around this issue, you will have to delete the entry in the cache. For example, if you loaded a module:

var a = require('./a');

you can delete its cache entry like this:

delete require.cache[require.resolve('./a')];

After that, requiring the module again will re-execute its file:

var a = require('./a');
Do note that this is not recommended in production because the delete will only delete the reference to the
loaded module, not the loaded data itself. The module is not garbage collected, so improper use of this feature
could lead to leaking memory.
A module encapsulates related code into a single unit of code. When creating a module, this can be
interpreted as moving all related functions into a file.
Now let's see an example. Imagine all files are in the same directory:
File: printer.js
"use strict";
File animals.js
"use strict";
module.exports = {
lion: function() {
console.log("ROAARR!!!");
}
};
File: app.js
Run this file by going to your directory and typing: node app.js
"use strict";
Make sure that you have sufficient access rights to the folder. These modules will be available to all Node
processes running on that machine.
In local mode installation, npm will download and install modules in the current working folder by creating a new
folder called node_modules. For example, if you are in /home/user/apps/my_app, a new folder
/home/user/apps/my_app/node_modules will be created if it does not already exist.
If Node fails to find the file, it will look inside the parent folder's node_modules (../node_modules/myModule.js). If
it fails again, it will try the next parent folder and keep moving up until it reaches the root or finds the required module.
You can also omit the .js extension, in which case Node will append the .js extension and search
for the file.
You can use the path for a folder to load a module like this:
If you do so, Node will search inside that folder. Node will presume this folder is a package and will try to look for a
package definition. That package definition should be a file named package.json. If that folder does not contain a
package definition file named package.json, the package entry point will assume the default value of index.js, and
Node will look, in this case, for a file under the path ./myModuleDir/index.js.
The last resort, if the module is not found in any of these folders, is the global module installation folder.
if (cluster.isMaster) {
// Fork workers.
for (let i = 0; i < numCPUs; i++) {
cluster.fork();
}
function startServer() {
const server = http.createServer((req, res) => {
res.writeHead(200);
res.end('Hello Http');
});
server.listen(3000);
}
if(!module.parent) {
// Start server if file is run directly
startServer();
} else {
// Export server, if file is referenced via cluster
module.exports = startServer;
}
In this example, we host a basic web server; however, we spin up workers (child processes) using the built-in
cluster module. The number of processes forked depends on the number of CPU cores available. This enables a
Node.js application to take advantage of multi-core CPUs, since a single instance of Node.js runs in a single thread.
The application will now share the same port (3000 in this example) across all the processes. Load will automatically
be distributed between workers using the round-robin method by default.
The following example creates worker child processes in the main process to handle the load across multiple cores.
Example
if (cluster.isMaster) {
// Fork workers.
for (var i = 0; i < numCPUs; i++) {
cluster.fork(); //creating child process
}
const rl = readline.createInterface({
input: fs.createReadStream('text.txt')
});
// Each new line emits an event - every time the stream receives \r, \n, or \r\n
rl.on('line', (line) => {
console.log(line);
});
rl.on('close', () => {
console.log('Done reading file');
});
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.close();
});
NPM aliases -S to --save and -D to --save-dev to save in your production or development dependencies
respectively.
The package will appear in your dependencies; if you use --save-dev instead of --save, the package will appear in
your devDependencies.
{
"name": "module-name",
"version": "10.3.1",
"description": "An example module to illustrate the usage of a package.json",
"author": "Your Name <your.name@example.org>",
"contributors": [{
"name": "Foo Bar",
"email": "foo.bar@example.com"
}],
"bin": {
"module-name": "./bin/module-name"
},
"scripts": {
"test": "vows --spec --isolate",
"start": "node index.js",
"predeploy": "echo About to deploy",
"postdeploy": "echo Deployed",
"prepublish": "coffee --bare --compile --output lib/foo src/foo/*.coffee"
},
"main": "lib/foo.js",
"repository": {
"type": "git",
"url": "https://github.com/username/repo"
},
"bugs": {
"url": "https://github.com/username/issues"
name
The unique name of your package, which should be written in lowercase. This property is required and your package
will not install without it.
version
The version of the package is specified by Semantic Versioning (semver), which assumes that a version number is
written as MAJOR.MINOR.PATCH and that you increment the:

MAJOR version when you make incompatible API changes
MINOR version when you add functionality in a backwards-compatible manner
PATCH version when you make backwards-compatible bug fixes
description
author
bin
An object which is used to expose binary scripts from your package. The object assumes that the key is the name of
the command and the value is the path to the script.
This property is used by packages that contain a CLI (command line interface).
scripts
An object which exposes additional npm commands. The object assumes that the key is the npm command and the
value is the script path. These scripts can be executed when you run npm run {command name} or npm run-script
{command name}.
Packages that contain a command line interface and are installed locally can be called without a relative path. So
instead of calling ./node_modules/.bin/mocha you can directly call mocha.
main
The main entry point to your package. When calling require('{module name}') in Node, this will be the actual file
that is required.
It's highly advised that requiring the main file does not generate any side effects. For instance, requiring the main
file should not start up an HTTP server or connect to a database. Instead, you should create something like
exports.init = function () {...} in your main script.
keywords
An array of keywords which describe your package. These will help people find your package.
devDependencies
These are the dependencies that are only intended for development and testing of your module. The dependencies
will be installed automatically unless the NODE_ENV=production environment variable has been set. If that is the
case, you can still install these packages using npm install --dev
peerDependencies
If you are using this module, then peerDependencies lists the modules you must install alongside this one. For
example, moment-timezone must be installed alongside moment because it is a plugin for moment, even if it doesn't
directly require("moment").
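For example, the plugin's own package.json might declare (the version range here is illustrative):

```json
"peerDependencies": {
  "moment": ">=2.0.0"
}
```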
preferGlobal
A property that indicates that this package prefers to be installed globally using npm install -g {module-name}.
This property is used by packages that contain a CLI (command line interface).
publishConfig
The publishConfig is an object with configuration values that will be used for publishing modules. The configuration
values that are set override your default npm configuration.
The most common use of the publishConfig is to publish your package to a private npm registry so you still have
the benefits of npm but for private packages. This is done by simply setting the URL of your private npm registry as
the value for the registry key.
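For example (the registry URL is a placeholder for your private registry):

```json
"publishConfig": {
  "registry": "https://registry.example.com/"
}
```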
files
This is an array of all the files to include in the published package. Either a file path or folder path can be used. All
the contents of a folder path will be included. This reduces the total size of your package by only including the
correct files to be distributed. This field works in conjunction with a .npmignore rules file.
{
"scripts": {
"pretest": "scripts/pretest.js",
"test": "scripts/test.js",
"posttest": "scripts/posttest.js"
}
}
In this case, you can execute the script by running either of these commands:

$ npm test
$ npm run-script test
Pre-defined scripts
Script Name Description
prepublish Run before the package is published.
publish, postpublish Run after the package is published.
preinstall Run before the package is installed.
install, postinstall Run after the package is installed.
preuninstall, uninstall Run before the package is uninstalled.
postuninstall Run after the package is uninstalled.
preversion, version Run before bumping the package version.
postversion Run after bumping the package version.
pretest, test, posttest Run by the npm test command
prestop, stop, poststop Run by the npm stop command
prestart, start, poststart Run by the npm start command
prerestart, restart, postrestart Run by the npm restart command
User-defined scripts
You can also define your own scripts the same way you do with the pre-defined scripts:
{
"scripts": {
"preci": "scripts/preci.js",
"ci": "scripts/ci.js",
"postci": "scripts/postci.js"
}
}
In this case, you can execute the script by running either of these commands:

$ npm run-script ci
$ npm run ci
User-defined scripts also support pre and post scripts, as shown in the example above.
After adding them to your package.json, use the command npm install in your project directory in terminal.
devDependencies
"devDependencies": {
"module-name": "0.1.0"
}
For dependencies required only for development, like testing, styling, proxies, etc. These dev dependencies won't be
installed when running "npm install" in production mode.
{
"main": "server.js",
"repository" : {
"type": "git",
"url": "git+https://github.com/<accountname>/<repositoryname>.git"
},
"bugs": {
"url": "https://github.com/<accountname>/<repositoryname>/issues"
},
"homepage": "https://github.com/<accountname>/<repositoryname>#readme",
"files": [
"server.js", // source files
"README.md", // additional files
"lib" // folder with all included files
]
}
Field Description
main Entry script for this package. This script is returned when a user requires the package.
repository Location and type of the public repository
bugs Bugtracker for this package (e.g. github)
homepage Homepage for this package or the general project
files List of files and folders which should be downloaded when a user does a npm install <packagename>
In the above example, the dog is the publisher/EventEmitter, while the function that checks the item was the
subscriber/listener. You can make more listeners too:
myDog.on('bark', () => {
console.log('WHO\'S AT THE DOOR?');
// Panic
});
There can also be multiple listeners for a single event, and even remove listeners:
myDog.on('chew', takeADeepBreathe);
myDog.on('chew', calmDown);
// Undo the previous line with the next one:
myDog.removeListener('chew', calmDown);
myDog.once('chew', pet);
Section 11.2: Get the names of the events that are subscribed to
The function EventEmitter.eventNames() will return an array containing the names of the events currently
subscribed to.
emitter
.on("message", function(){ //listen for message event
console.log("a message was emitted!");
})
.on("message", function(){ //listen for message event
console.log("this is not the right message");
})
.on("data", function(){ //listen for data event
console.log("a data just occurred!!");
});
Whenever the server gets a request, it will emit an event called request which the supervisor is listening for, and
then the supervisor can react to the event.
emitter
.on("data", ()=>{ // add listener for data event
console.log("data event emitter");
});

console.log(emitter.listenerCount("data")) // => 1
console.log(emitter.listenerCount("message")) // => 0

emitter
.on("data", ()=>{ // add a second listener for data event
console.log("data event 2 emitter");
})
.on("message", ()=>{ // add listener for message event
console.log("message event emitter");
});

console.log(emitter.listenerCount("data")) // => 2
console.log(emitter.listenerCount("message"))// => 1
Using nodemon
This replaces the usual use of node entry.js (or node entry).
You can also add your nodemon startup as an npm script, which might be useful if you want to supply parameters
and not type them out every time.
Add to package.json:
"scripts": {
"start": "nodemon entry.js -devmode -something 1"
}
This way you can just use npm start from your console.
Browsersync is a tool that allows for live file watching and browser reloading. It's available as a NPM package.
Installation
To install Browsersync you'll first need to have Node.js and NPM installed. For more information see the SO
documentation on Installing and Running Node.js.
Once your project is set up you can install Browsersync with the following command:
This will install Browsersync in the local node_modules directory and save it to your developer dependencies.
If you'd rather install it globally use the -g flag in place of the -D flag.
Windows Users
Basic Usage
To automatically reload your site whenever you change a JavaScript file in your project use the following command:
Replace myproject.dev with the web address that you are using to access your project. Browsersync will output an
alternate address that can be used to access your site through the proxy.
Advanced Usage
Besides the command line interface that was described above Browsersync can also be used with Grunt.js and
Gulp.js.
Grunt.js
Usage with Grunt.js requires a plugin that can be installed like so:
grunt.loadNpmTasks('grunt-browser-sync');
Gulp.js
Browsersync works as a CommonJS module, so there's no need for a Gulp.js plugin. Simply require the module like
so:
You can now use the Browsersync API to configure it to your needs.
API
{
TERM: 'xterm-256color',
SHELL: '/usr/local/bin/bash',
USER: 'maciej',
PATH: '~/.bin/:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin',
PWD: '/Users/maciej',
EDITOR: 'vim',
SHLVL: '1',
HOME: '/Users/maciej',
LOGNAME: 'maciej',
_: '/usr/local/bin/node'
}
process.env.HOME // '/Users/maciej'

process.env.FOO = 'foobar';
process.env.FOO // 'foobar'
Code Example:
index.js
var sum = 0;
for (var i = 2; i < process.argv.length; i++) {
sum += Number(process.argv[i]);
}
console.log(sum);
Usage Example:

node index.js 2 5 6 7

The output will be 20.

In the for loop, the counter begins at index 2 because the first two elements of the
process.argv array are always ['path/to/node', 'path/to/script', ...]
mkdir env
Create environments.js:
# Dev properties
[main]
# Application port to run the node server
app.port=8080
[database]
# Database connection to mysql
mysql.host=localhost
mysql.port=2500
...
or
dev.json
{
PORT : 3000,
DB : {
host : "localhost",
user : "bob",
password : "12345"
}
}
qa.json
{
PORT : 3001,
DB : {
host : "where_db_is_hosted",
user : "bob",
password : "54321"
}
}
The following code in the application will load the property file for the environment we want to use.
This uses Bluebird's promisifyAll method to promisify what is conventionally callback-based code like the above.
Bluebird will make a promise version of all the methods in the object; those promise-based methods' names have
Async appended to them:
There are some libraries (e.g., MassiveJS) that can't be promisified if the immediate object of the method is not
passed as the second parameter. In that case, just pass the immediate object of the method that needs to be
promisified as the second parameter, enclosed in a context property.
var fs = require('fs');
This method spawns a new process using a given command and an array of arguments. The return value is an
instance of ChildProcess, which in turn provides the stdout and stderr properties. Both of those streams are
instances of stream.Readable.
The following code is equivalent to running the command ls -lh /usr.
const { exec } = require('child_process');

exec('ls -lh /usr', (err, stdout, stderr) => {
console.log(`stdout: ${stdout}`);
console.log(`stderr: ${stderr}`);
});
The command parameter is a string, and is required, while the options object and callback are both optional. If no
options object is specified, then exec will use the following as a default:
{
encoding: 'utf8',
timeout: 0,
maxBuffer: 200*1024,
killSignal: 'SIGTERM',
cwd: null,
env: null
}
The options object also supports a shell parameter, which is by default /bin/sh on UNIX and cmd.exe on Windows,
a uid option for setting the user identity of the process, and a gid option for the group identity.
The callback, which is called when the command is done executing, is called with the three arguments (err,
stdout, stderr). If the command executes successfully, err will be null, otherwise it will be an instance of Error,
with err.code being the exit code of the process and err.signal being the signal that was sent to terminate it.
The stdout and stderr arguments are the output of the command. They are decoded with the encoding specified in
the options object (default: 'utf8') and passed as strings, but can otherwise be returned as Buffer objects.
There also exists a synchronous version of exec, which is execSync. The synchronous version does not take a
callback, and will return stdout instead of an instance of ChildProcess. If the synchronous version encounters an
error, it will throw and halt your program. It looks like this:
const { execSync } = require('child_process');
const stdout = execSync('ls -lh /usr');
console.log(stdout.toString());
Unlike child_process.exec, this function will accept up to four parameters, where the second parameter is an
array of arguments you'd like to supply to the executable:
The options and callback format are otherwise identical to child_process.exec. The same goes for the
synchronous version of the function:
1. try-catch block
2. error as the first argument to a callback
3. emit an error event using eventEmitter
try-catch is used to catch exceptions thrown from synchronous code execution. If the caller (or the caller's
caller, ...) used try/catch, then they can catch the error. If none of the callers had a try-catch, then the program
crashes.
If try-catch is used around an async operation and an exception is thrown from the callback of the async method, it
will not get caught by the try-catch. To catch an exception from an async operation's callback, it is preferred to use promises.
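A sketch of the promise-based approach: the same failing async operation, with the error handled in .catch where a surrounding try-catch could not reach it:

```javascript
function doSomeAsynchronousOperation() {
  return new Promise(function (resolve, reject) {
    // imitating an async operation that fails
    setTimeout(function () {
      reject(new Error('async operation exception'));
    }, 100);
  });
}

doSomeAsynchronousOperation()
  .then(function (result) { console.log(result); })
  .catch(function (err) {
    // The exception IS handled here, unlike with try-catch
    console.log(err.message); // async operation exception
  });
```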
Example to understand it better
// ** Example - 1 **
function doSomeSynchronousOperation(req, res) {
if(req.body.username === ''){
throw new Error('User Name cannot be empty');
}
return true;
}
// ** Example - 2 **
function doSomeAsynchronousOperation(req, res, cb) {
// imitating async operation
return setTimeout(function(){
cb(null, []);
},1000);
}
try {
// asynchronous code
doSomeAsynchronousOperation(req, res, function(err, rs){
throw new Error("async operation exception");
})
} catch(e) {
// Exception will not get handled here
console.log(e.message);
}
// The exception is unhandled and hence will cause application to break
Callbacks are mostly used in Node.js because a callback delivers an event asynchronously. The user passes you a function
(the callback), and you invoke it some time later when the asynchronous operation completes.
The usual pattern is that the callback is invoked as callback(err, result), where only one of err and result is non-null,
depending on whether the operation succeeded or failed.
emit For more complicated cases, instead of using a callback, the function itself can return an EventEmitter object,
and the caller would be expected to listen for error events on the emitter.
var EventEmitter = require('events').EventEmitter;

function doSomeAsynchronousOperation(req) {
  var myEvent = new EventEmitter();
  // runs asynchronously
  setTimeout(function () {
    myEvent.emit('error', new Error('User Name cannot be empty'));
  }, 1000);
  return myEvent;
}
var event = doSomeAsynchronousOperation(req);

event.on('error', function(err) {
  console.log(err);
});

event.on('done', function(result) {
  console.log(result); // true
});
Many people let their Node.js server(s) silently swallow errors.
If, for example, the database connection (pool) gets closed for some reason, errors will propagate constantly:
the server keeps running, but it never reconnects to the database.
In case of an "uncaughtException" it is good to restart the server and return it to its initial state, where we know it
will work. The exception is logged and the application is terminated, but since it runs in a container that makes
sure the server is running, the server is restarted (returned to its initial working state).
Install forever (or another CLI tool that makes sure the node server runs continuously).
The reason we use forever is that after the server is terminated, the forever process will
start the server again.
On a side note, exceptions could also be handled with Clusters and Domains.
Currently, errors thrown in a promise that are not caught result in the error being swallowed, which can make it
difficult to track down. This can be solved by using linting tools like ESLint or by ensuring you always have a
catch clause.
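As a safety net (not a substitute for proper catch clauses), the process-level unhandledRejection event can log rejections that would otherwise be swallowed:

```javascript
// Log any promise rejection that no .catch() handled
process.on('unhandledRejection', function (reason) {
  console.error('Unhandled rejection:', reason.message);
});

// Without the handler above this error would be swallowed silently
// (newer Node versions may instead crash the process)
Promise.reject(new Error('lost error'));
```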
Navigate to the directory in which your Node.js script resides and run the following command each time you want to
start a Node.js instance monitored by pm2:
pm2 list
pm2 monit
pm2 kill
7. As opposed to restart, which kills and restarts the process, reload achieves a 0-second-downtime reload
8. View logs
$ forever list
info: Forever processes running
$ forever stop 0
Windows 10:
3. Click Node.js.
4. Click Uninstall.
5. Click the new Uninstall button.
Windows 7-8.1:
command -v nvm
nvm ls-remote
For example
nvm ls
$ nvm ls
v4.3.0
v5.5.0
You can install Node Version Manager using git, curl or wget. You run these commands in Terminal on Mac OSX.
curl example:
wget example:
To test that nvm was properly installed, close and re-open Terminal and enter nvm. If you get a nvm: command not
found message, your OS may not have the necessary .bash_profile file. In Terminal, enter touch ~/.bash_profile
and run the above install script again.
If you still get nvm: command not found, try the following:
In Terminal, enter nano .bashrc. You should see an export script almost identical to the following:
source ~/.nvm/nvm.sh
nvm ls
v4.5.0
v6.7.0
using alias
nvm run default --version or nvm exec default node --version
Running node v6.7.0 (npm v3.10.3)
v6.7.0
Version Switching
A proper use case would be if you want to set some version other than the stable version as the default alias;
versions aliased as default are loaded in the console by default.
Like:
Note:
var http = require('http');
var httpPort = 80;
var handler = function(req, res) { res.end('ok'); };

http.createServer(handler).listen(httpPort, start_callback);
function start_callback(){
  console.log('Start HTTP on port ' + httpPort)
}
node http_server.js
Now you need to test your server: open your internet browser and navigate to this URL:
http://127.0.0.1:80
if your machine is running a Linux server you can test it like this:
curl 127.0.0.1:80
ok
In the console that is running the app, you will see this result:
var http = require('http');

var options = {
  hostname: '127.0.0.1',
  port: 80,
  path: '/',
  method: 'GET'
};

var req = http.request(options, function(res) {
  res.on('data', function(chunk) {
    console.log('BODY: ' + chunk);
  });
});

req.on('error', function(e) {
  console.log('problem with request: ' + e.message);
});

req.end();
node http_client.js
This code works but it's bulky and buffers up the entire data.txt file into memory for every request before writing
the result back to clients. If data.txt is very large, your program could start eating a lot of memory as it serves lots of
users concurrently, particularly for users on slow connections.
The user experience is poor too because users will need to wait for the whole file to be buffered into memory on
your server before they can start receiving any contents.
Luckily both of the (req, res) arguments are streams, which means we can write this in a much better way using
fs.createReadStream() instead of fs.readFile():
Here .pipe() takes care of listening for 'data' and 'end' events from the fs.createReadStream(). This code is not only
cleaner, but now the data.txt file will be written to clients one chunk at a time immediately as they are received
from the disk.
var fs = require('fs')
When writable streams are also readable streams, i.e. when they're duplex streams, you can continue piping them to
other writable streams.
var zlib = require('zlib')

fs.createReadStream('style.css')
  .pipe(zlib.createGzip()) // The returned object, zlib.Gzip, is a duplex stream.
  .pipe(fs.createWriteStream('style.css.gz'))
Note that you must pipe to the output streams synchronously (at the same time) before any data 'flows'. Failure to
do so might lead to incomplete data being streamed.
Also note that stream objects can emit error events; be sure to responsibly handle these events on every stream, as
needed:
To create Stream object we need to use the stream module provided by NodeJs
var fs = require("fs");
var stream = require("stream").Writable;
/*
* Implementing the write function in writable stream class.
* This is the function which will be used when other stream is piped into this
* writable stream.
*/
stream.prototype._write = function(chunk, encoding, callback) {
  console.log(chunk.toString());
  callback(); // signal that this chunk has been handled
};

var customStream = new stream();
fs.createReadStream("am1.js").pipe(customStream);
This gives us our own custom writable stream; we can implement anything within the _write function. The
method above works in Node.js 4.x, but in Node.js 6.x ES6 introduced classes, so the syntax has changed.
Below is the code for the 6.x version of Node.js
The first one, which uses an async method for reading a file, and providing a callback function which is called once
the file is fully read into the memory:
And the second, which uses streams in order to read the file's content, piece by piece:
fileStream.on('end', () => {
console.log(fileContent);
})
It's worth mentioning that both examples do the exact same thing. What's the difference then?
When the files you deal with are small then there is no real effect when using streams, but what happens when the
file is big? (so big that it takes 10 seconds to read it into memory)
Without streams you'll be waiting, doing absolutely nothing (unless your process does other stuff), until the 10
seconds pass and the file is fully read, and only then you can start processing the file.
With streams, you get the file's contents piece by piece, right when they're available - and that lets you process
the file while it is being read.
The above example does not illustrate how streams can be utilized for work that cannot be done when going the
callback route. Consider the following: I would like to download a gzip file, unzip it and save its content to the disk.
Given the file's URL, this is what needs to be done:
Here's a small file, which is stored in my S3 storage. The following code does the above in the callback fashion.
// 1339 milliseconds
// 1204 milliseconds
Yep, it's not faster when dealing with small files; the tested file weighs 80 KB. Testing this on a bigger file, 71 MB
gzipped (382 MB unzipped), shows that the streams version is much faster.
It took 20925 milliseconds to download 71 MB, unzip it and then write 382 MB to disk using the callback
fashion.
In comparison, it took 13434 milliseconds to do the same using the streams version (35% faster, for a
not-so-big file).
Runtime flags
Any code running in your application (including external modules) can check the value of NODE_ENV:
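For example, a module can branch on the environment the process was started with:

```javascript
// Any module can read NODE_ENV and toggle behavior accordingly
if (process.env.NODE_ENV === 'production') {
  console.log('running in production mode');
} else {
  console.log('running in development mode');
}
```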
Dependencies
When the NODE_ENV environment variable is set to 'production' all devDependencies in your package.json file will be
completely ignored when running npm install. You can also enforce this with a --production flag:
Windows :
set NODE_ENV=production
Linux / macOS :
export NODE_ENV=production
This sets NODE_ENV for the current shell session, so any apps started after this statement will have NODE_ENV set to
production.
This will set NODE_ENV for the current app only. This helps when we want to test our apps on different
environments.
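Prefixing the command sets the variable for that single invocation only (Linux/macOS shell); a self-contained check:

```shell
# NODE_ENV is set only for this one process
NODE_ENV=production node -e 'console.log(process.env.NODE_ENV)'
# prints: production
```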
This uses the idea explained here. Refer to this post for a more detailed explanation.
Basically, you create a .env file and run a bash script to set the variables on the environment.
To avoid writing a bash script, the env-cmd package can be used to load the environment variables defined in the
.env file.
This package allows environment variables to be set in one way for every platform.
After installing it with npm, you can just add it to your deployment script in package.json as follows:
Installing PM2:
Process can be started in cluster mode involving integrated load balancer to spread load between processes:
pm2 start app.js -i 0 --name "api" (-i is to specify number of processes to spawn. If it is 0, then process
number will be based on CPU cores count)
When there are multiple users in production, it is necessary to have a single point for PM2. Therefore the pm2 command must
be prefixed with a location (for the PM2 config); otherwise it will spawn a new pm2 process for every user, with config in the
respective home directory, and behavior will be inconsistent.
Some of the popular process managers made by the node community are forever, pm2, etc.
forever
forever is a command-line interface tool for ensuring that a given script runs continuously. forever’s simple
interface makes it ideal for running smaller deployments of Node.js apps and scripts.
Run application :
This starts the server and gives an id for the process (starting from 0).
Restart application :
$ forever restart 0
Stop application :
$ forever stop 0
Similar to restart, 0 is the id of the server. You can also give the process id or script name in place of the id given by
forever.
pm2 list
Stop an app:
dev.json
{
"PORT": 3000,
"DB": {
"host": "localhost",
"user": "bob",
"password": "12345"
}
}
qa.json
{
"PORT": 3001,
"DB": {
"host": "where_db_is_hosted",
"user": "bob",
"password": "54321"
}
}
The following code in the application will load the property file for whichever environment we want to use.
process.argv.forEach(function (val) {
  var arg = val.split("=");
  if (arg.length > 0) {
    if (arg[0] === 'env') {
      var env = require('./' + arg[1] + '.json');
      exports.prop = env;
    }
  }
});
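The env-argument parsing itself can be isolated into a pure function and checked without any JSON files (parseEnvArg is an illustrative name, not from the original):

```javascript
// Extract the NAME from an env=NAME command-line argument,
// the same way the loop above does
function parseEnvArg(argv) {
  var envName = null;
  argv.forEach(function (val) {
    var arg = val.split('=');
    if (arg.length > 1 && arg[0] === 'env') {
      envName = arg[1];
    }
  });
  return envName;
}

console.log(parseEnvArg(['node', 'server.js', 'env=dev'])); // dev
```

The application would then be started as node server.js env=dev, and the matching dev.json file would be loaded.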
if (cluster.isMaster) {
// In real life, you'd probably use more than just 2 workers,
// and perhaps not put the master and worker in the same file.
//
// You can also of course get a bit fancier about logging, and
// implement whatever custom logic you need to prevent DoS
// attacks and other bad behavior.
//
// See the options in the cluster documentation.
//
// The important thing is that the master does very little,
// increasing our resilience to unexpected errors.
console.log('your server is working on ' + numCPUs + ' cores');
cluster.on('disconnect', function(worker) {
console.error('disconnect!');
//clearTimeout(timeout);
cluster.fork();
});
} else {
require('./app.js');
If your Node.js application should handle SSL/TLS, it can be secured by loading the key and cert files.
If your certificate provider requires a certificate authority (CA) chain, it can be added in the ca option as an array. A
chain with multiple entries in a single file must be split into multiple files and entered in the same order into the
array as Node.js does not currently support multiple ca entries in one file. An example is provided in the code
below for files 1_ca.crt and 2_ca.crt. If the ca array is required and not set properly, client browsers may display
messages that they could not verify the authenticity of the certificate.
Example
const options = {
key: fs.readFileSync('privatekey.pem'),
cert: fs.readFileSync('certificate.pem'),
ca: [fs.readFileSync('1_ca.crt'), fs.readFileSync('2_ca.crt')]
};
This can happen because cookies are sent with every request to a website, even when those requests come from a
different site.
We can use the csurf module for creating a CSRF token and validating it.
Example
So, when we access GET /form, it will pass the csrf token csrfToken to the view.
Now, inside the view, set the csrfToken value as the value of a hidden input field named _csrf.
form(action="/process" method="post")
input(type="hidden", name="_csrf", value=csrfToken)
span Name:
input(type="text", name="name", required=true)
br
input(type="submit")
1. create the folder where you want to store your key & certificate :
mkdir conf
2. go to that directory :
wget https://raw.githubusercontent.com/anders94/https-authorized-clients/master/keys/ca.cnf
openssl req -new -x509 -days 9999 -config ca.cnf -keyout ca-key.pem -out ca-cert.pem
5. now that we have our certificate authority in ca-key.pem and ca-cert.pem, let's generate a private key for
the server :
wget https://raw.githubusercontent.com/anders94/https-authorized-clients/master/keys/server.cnf
openssl x509 -req -extfile server.cnf -days 999 -passin "pass:password" -in csr.pem -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem
2. update CA store :
sudo update-ca-certificates
const httpsOptions = {
key: fs.readFileSync('path/to/server-key.pem'),
cert: fs.readFileSync('path/to/server-crt.pem')
};
https.createServer(httpsOptions, app).listen(4433);
If you also want to support http requests, you need to make just this small modification:
const httpsOptions = {
key: fs.readFileSync('path/to/server-key.pem'),
cert: fs.readFileSync('path/to/server-crt.pem')
};
http.createServer(app).listen(8888);
https.createServer(httpsOptions, app).listen(4433);
var fs = require('fs');
var http = require('http');
var https = require('https');
var privateKey = fs.readFileSync('sslcert/server.key', 'utf8');
var certificate = fs.readFileSync('sslcert/server.crt', 'utf8');
httpServer.listen(8080);
httpsServer.listen(8443);
In that way you provide express middleware to the native http/https server
Next, create the database schema and the name of the collection:
To check if we have successfully connected to the database, we can use the events open, error from the
mongoose.connection object.
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function() {
// we're connected!
});
Code
Then, add dependencies to server.js, create the database schema and the name of the collection, create an
Express.js server, and connect to MongoDB:
Now add Express.js routes that we will use to query the data:
Model.find({
'request': query
}, function(err, result) {
if (err) throw err;
if (result) {
res.json(result)
} else {
res.send(JSON.stringify({
error : 'Error'
}))
}
})
})
Assume that the following documents are in the collection in the model:
{
"_id" : ObjectId("578abe97522ad414b8eeb55a"),
"request" : "JavaScript is Awesome",
"time" : 1468710551
}
{
"_id" : ObjectId("578abe9b522ad414b8eeb55b"),
"request" : "JavaScript is Awesome",
"time" : 1468710555
}
{
"_id" : ObjectId("578abea0522ad414b8eeb55c"),
"request" : "JavaScript is Awesome",
"time" : 1468710560
}
And that the goal is to find and display all the documents containing the word "JavaScript" under the "request"
key.
To do this, first create a text index for "request" in the collection. For this, add the following code to server.js:
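With mongoose, a text index on the request field is declared on the schema; a sketch of the declaration (the schema variable name is an assumption, not taken from the original):

```javascript
// Create a MongoDB text index on "request" so $text queries can match it
schema.index({ request: 'text' });
```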
And replace:
Model.find({
'request': query
}, function(err, result) {
With:
Model.find({
$text: {
$search: query
}
}, function(err, result) {
Here, we are using the $text and $search MongoDB operators to find all documents in the collection collectionName
that contain at least one word from the specified find query.
Usage
http://localhost:8080/find/<query>
Example:
http://localhost:8080/find/JavaScript
Output:
[{
_id: "578abe97522ad414b8eeb55a",
request: "JavaScript is Awesome",
time: 1468710551,
__v: 0
},
{
_id: "578abe9b522ad414b8eeb55b",
request: "JavaScript is Awesome",
time: 1468710555,
__v: 0
},
{
_id: "578abea0522ad414b8eeb55c",
request: "JavaScript is Awesome",
time: 1468710560,
__v: 0
}]
Code
Then, add dependencies to your server.js file, create the database schema and the name of the collection, create
an Express.js server, and connect to MongoDB:
Now add Express.js routes that we will use to write the data:
if(result) {
res.json(result)
}
})
})
Here the query variable will be the <query> parameter from the incoming HTTP request, which will be saved to
MongoDB:
If an error occurs while trying to write to MongoDB, you will receive an error message on the console. If all is
successful, you will see the saved data in JSON format on the page.
//...
if(result) {
res.json(result)
}
})
//...
Now, you need to start MongoDB and run your server.js file using node server.js.
Usage
http://localhost:8080/save/<query>
Example:
http://localhost:8080/save/JavaScript%20is%20Awesome
{
__v: 0,
request: "JavaScript is Awesome",
time: 1469411348,
_id: "57957014b93bc8640f2c78c4"
}
Code
Then, add dependencies to server.js, create the database schema and the name of the collection, create an
Express.js server, and connect to MongoDB:
Now add Express.js routes that we will use to query the data:
Model.find({
'request': query
}, function(err, result) {
if (err) throw err;
if (result) {
res.json(result)
} else {
res.send(JSON.stringify({
error : 'Error'
}))
}
})
})
Assume that the following documents are in the collection in the model:
{
"_id" : ObjectId("578abe97522ad414b8eeb55a"),
"request" : "JavaScript is Awesome",
"time" : 1468710551
}
{
"_id" : ObjectId("578abe9b522ad414b8eeb55b"),
"request" : "JavaScript is Awesome",
"time" : 1468710555
}
{
"_id" : ObjectId("578abea0522ad414b8eeb55c"),
"request" : "JavaScript is Awesome",
"time" : 1468710560
}
And the goal is to find and display all the documents containing "JavaScript is Awesome" under the "request"
key.
For this, start MongoDB and run server.js with node server.js:
Usage
http://localhost:8080/find/<query>
http://localhost:8080/find/JavaScript%20is%20Awesome
Output:
[{
_id: "578abe97522ad414b8eeb55a",
request: "JavaScript is Awesome",
time: 1468710551,
__v: 0
},
{
_id: "578abe9b522ad414b8eeb55b",
request: "JavaScript is Awesome",
time: 1468710555,
__v: 0
},
{
_id: "578abea0522ad414b8eeb55c",
request: "JavaScript is Awesome",
time: 1468710560,
__v: 0
}]
doc.find({'some.value':5},function(err,docs){
//returns array docs
});
doc.findOne({'some.value':5},function(err,doc){
//returns document doc
});
doc.findById(obj._id,function(err,doc){
//returns document doc
});
Mongoose Connection
By default, mongoose adds two new fields to our model, even when they are not defined in the model. Those
fields are:
_id
Mongoose assigns each of your schemas an _id field by default if one is not passed into the Schema constructor.
The type assigned is an ObjectId to coincide with MongoDB's default behavior. If you don't want an _id added to
your schema at all, you may disable it using this option.
__v or versionKey
The versionKey is a property set on each document when first created by Mongoose. This key's value contains the
internal revision of the document. The name of this document property is configurable.
Compound indexes
usersSchema.index({username: 1});
usersSchema.index({email: 1});
In this case our model has two more indexes, one for the field username and another for the email field. But we can
also create compound indexes.
By default, mongoose always calls ensureIndex for each index sequentially and emits an 'index' event on the
model when all the ensureIndex calls succeed or when there is an error.
In MongoDB, ensureIndex has been deprecated since version 3.0.0; it is now an alias for createIndex.
It is recommended to disable this behavior by setting the autoIndex option of your schema to false, or globally on the
connection by setting the option config.autoIndex to false.
usersSchema.set('autoIndex', false);
Code
Then, add dependencies to server.js, create the database schema and the name of the collection, create an
Express.js server, and connect to MongoDB:
Now add Express.js routes that we will use to query the data:
Model.find({
'request': query
})
.exec() //remember to add exec, queries have a .then attribute but aren't promises
.then(function(result) {
if (result) {
res.json(result)
} else {
next() //pass to 404 handler
}
})
.catch(next) //pass to error handler
})
Assume that the following documents are in the collection in the model:
{
"_id" : ObjectId("578abe97522ad414b8eeb55a"),
"request" : "JavaScript is Awesome",
"time" : 1468710551
}
{
"_id" : ObjectId("578abe9b522ad414b8eeb55b"),
"request" : "JavaScript is Awesome",
"time" : 1468710555
}
{
"_id" : ObjectId("578abea0522ad414b8eeb55c"),
"request" : "JavaScript is Awesome",
"time" : 1468710560
}
And the goal is to find and display all the documents containing "JavaScript is Awesome" under the "request"
key.
For this, start MongoDB and run server.js with node server.js:
Usage
http://localhost:8080/find/<query>
Example:
http://localhost:8080/find/JavaScript%20is%20Awesome
[{
_id: "578abe97522ad414b8eeb55a",
request: "JavaScript is Awesome",
time: 1468710551,
__v: 0
},
{
_id: "578abe9b522ad414b8eeb55b",
request: "JavaScript is Awesome",
time: 1468710555,
__v: 0
},
{
_id: "578abea0522ad414b8eeb55c",
request: "JavaScript is Awesome",
time: 1468710560,
__v: 0
}]
When the tasks are finished, async calls the main callback with all errors and all results of the tasks.
function shortTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfShortTime');
}, 200);
}
function mediumTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfMediumTime');
}, 500);
}
function longTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfLongTime');
}, 1000);
}
async.parallel([
shortTimeFunction,
mediumTimeFunction,
longTimeFunction
],
function(err, results) {
if (err) {
return console.error(err);
}
console.log(results);
});
You can replace the tasks array parameter with an object. In this case, results will also be an object with the same
keys as tasks.
It's very useful to compute some tasks and find easily each result.
async.parallel({
short: shortTimeFunction,
medium: mediumTimeFunction,
long: longTimeFunction
},
function(err, results) {
if (err) {
return console.error(err);
}
Each parallel function is passed a callback. This callback can either return an error as the first argument or success
values after that. If a callback is passed several success values, these results are returned as an array.
async.parallel({
short: function shortTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfShortTime1', 'resultOfShortTime2');
}, 200);
},
medium: function mediumTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfMediumTime1', 'resultOfMediumTime2');
}, 500);
}
},
function(err, results) {
if (err) {
return console.error(err);
}
console.log(results);
});
Result :
{
short: ["resultOfShortTime1", "resultOfShortTime2"],
medium: ["resultOfMediumTime1", "resultOfMediumTime2"]
}
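For comparison, the fan-out that async.parallel performs can be expressed with the built-in Promise.all, with no library required (a sketch using a small delay helper):

```javascript
// Resolve with `value` after `ms` milliseconds
function delay(ms, value) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, ms);
  });
}

Promise.all([
  delay(200, 'resultOfShortTime'),
  delay(500, 'resultOfMediumTime'),
  delay(1000, 'resultOfLongTime')
]).then(function (results) {
  // Results arrive in the order of the input array, not completion order
  console.log(results); // ['resultOfShortTime', 'resultOfMediumTime', 'resultOfLongTime']
});
```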
}, function(err) {
  // If any of the user creations failed, err will be set.
  if( err ) {
    // One of the iterations produced an error.
    // All processing will now stop.
    console.log('unable to create user');
  } else {
    console.log('All users created successfully');
  }
});
When the tasks finish successfully, async calls the "master" callback with all errors and all results of the tasks.
function shortTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfShortTime');
}, 200);
}
function mediumTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfMediumTime');
}, 500);
}
function longTimeFunction(callback) {
setTimeout(function() {
callback(null, 'resultOfLongTime');
}, 1000);
}
async.series([
mediumTimeFunction,
shortTimeFunction,
longTimeFunction
],
function(err, results) {
if (err) {
return console.error(err);
}
console.log(results);
});
You can replace the tasks array parameter with an object. In this case, results will also be an object with the same
keys as tasks.
async.series({
short: shortTimeFunction,
medium: mediumTimeFunction,
long: longTimeFunction
},
function(err, results) {
if (err) {
return console.error(err);
}
console.log(results);
});
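For comparison, on Node 8+ the same one-after-another behavior can be written with built-in async/await (a sketch using a small delay helper):

```javascript
// Resolve with `value` after `ms` milliseconds
function delay(ms, value) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, ms);
  });
}

async function runInSeries() {
  var results = [];
  // Each task starts only after the previous one has finished
  results.push(await delay(500, 'resultOfMediumTime'));
  results.push(await delay(200, 'resultOfShortTime'));
  results.push(await delay(1000, 'resultOfLongTime'));
  return results;
}

runInSeries().then(function (results) { console.log(results); });
```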
When the tasks finish successfully, async calls the "master" callback with all errors and all results of the tasks.
function getUserRequest(callback) {
// We simulate the request with a timeout
setTimeout(function() {
var userResult = {
name : 'Aamu'
};
callback(null, userResult);
}, 500);
}
callback(null, friendsResult);
}, 500);
}
async.waterfall([
getUserRequest,
getUserFriendsRequest
],
function(err, results) {
  if (err) {
    return console.error(err);
  }
  console.log(JSON.stringify(results));
});
Result: results contains the second callback parameter of the last function in the waterfall, which is
friendsResult in this case.
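For comparison, the same result-passing chain can be written with built-in promises, where each .then receives the previous step's result:

```javascript
function getUser() {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve({ name: 'Aamu' }); }, 50);
  });
}

function getUserFriends(user) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve([user.name + "'s friend"]); }, 50);
  });
}

// Each step receives the previous step's result, like async.waterfall
getUser()
  .then(getUserFriends)
  .then(function (friends) { console.log(JSON.stringify(friends)); }); // ["Aamu's friend"]
```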
These calls run in parallel. When we want to run them one at a time, use async.timesSeries instead.
Output:
server.js:
app.get('/',function(req,res){
res.sendFile(__dirname + "/index.html");
});
app.post('/api/file',function(req,res){
var upload = multer({ storage : storage}).single('userFile');
upload(req,res,function(err) {
if(err) {
return res.end("Error uploading file.");
}
res.end("File is uploaded");
});
});
app.listen(3000,function(){
console.log("Working on port 3000");
});
index.html:
<form id = "uploadForm"
enctype = "multipart/form-data"
action = "/api/file"
method = "post"
>
<input type="file" name="userFile" />
<input type="submit" value="Upload File" name="submit">
</form>
Note:
To upload a file with its extension you can use the Node.js path built-in library
and change:
In this example, we see how to upload files while allowing only certain extensions,
for example only image extensions. Just add a fileFilter condition to var upload = multer({ storage :
storage}).single('userFile');
Now you can upload only image files with png, jpg, gif or jpeg extensions
npm i formidable@latest
http.createServer(function(req, res) {
if (req.url == '/upload' && req.method.toLowerCase() == 'post') {
// parse a file upload
var form = new formidable.IncomingForm();
return;
}
Node.js server
Browser client
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Hello World with Socket.io</title>
</head>
<body>
<script src="https://cdn.socket.io/socket.io-1.4.5.js"></script>
<script>
var socket = io("http://localhost:3000");
socket.on("message-from-server-to-client", function(msg) {
document.getElementById('message').innerHTML = msg;
});
socket.emit('message-from-client-to-server', 'Hello World!');
</script>
<p>Socket.io Hello World client started!</p>
<p id="message"></p>
</body>
</html>
MongoDB.connect('mongodb://localhost:27017/databaseName')
.then(function(database) {
const collection = database.collection('collectionName');
return collection.insert({key: 'value'});
})
.then(function(result) {
console.log(result);
});
db.collection(collection).find()
//START SERVER
server.listen(3000, function () {
console.log("Server running");
})
Docs - http://mozilla.github.io/nunjucks/
Install - npm i nunjucks
app.js
// Apply nunjucks and add custom filter and function (for example).
var env = nunjucks.configure(['views/'], { // set folders with templates
autoescape: true,
express: app
});
env.addFilter('myFilter', function(obj, arg1, arg2) {
console.log('myFilter', obj, arg1, arg2);
// Do smth with obj
return obj;
});
env.addGlobal('myFunc', function(obj, arg1) {
console.log('myFunc', obj, arg1);
// Do smth with obj
return obj;
});
app.listen(3000, function() {
console.log('Example app listening on port 3000...');
});
/views/index.html
<html>
<head>
<title>Nunjucks example</title>
</head>
<body>
{% block content %}
{{title}}
/views/foo.html
{% extends "index.html" %}
{# This is comment #}
{% block content %}
<h1>{{title}}</h1>
{# apply custom function and next build-in and custom filters #}
{{ myFunc(smthVar) | lower | myFilter(5, 'abc') }}
{% endblock %}
Node.js provides a built-in, non-graphical debugging utility. To start the built-in debugger, start the application
with this command:
'use strict';
The keyword debugger will stop the debugger at that point in the code.
Command reference
1. Stepping
2. Breakpoints
Once the above command runs you will see the following output. To exit the debugger interface, type
process.exit()
Use repl to enter code interactively. The repl mode has the same context as the line you are debugging. This allows
you to examine the contents of variables and test out lines of code. Press Ctrl+C to leave the debug repl.
You can run Node's built-in V8 inspector! The node-inspector plug-in is no longer needed.
Simply pass the inspector flag and you'll be provided with a URL to the inspector
node-debug filename.js
http://localhost:8080/debug?port=5858
In this case, start the node inspector on a different port using the following command.
$ node-inspector --web-port=6500
}).listen(8125);
console.log('Server running at http://127.0.0.1:8125/');
// Set to true if you need the website to include cookies in the requests sent
// to the API (e.g. in case you use sessions)
response.setHeader('Access-Control-Allow-Credentials', true);
More details on Node and ES6 can be found on their site https://nodejs.org/en/docs/es6/
Since NodeJS v6 there has been pretty good support. So if you are using NodeJS v6 or above you can enjoy using ES6.
However, you may also want to use some of the unreleased features and some from beyond. For this you will need
to use a transpiler.
It is possible to run a transpiler at run time and build, to use all of the ES6 features and more. The most popular
transpiler for JavaScript is called Babel
Babel allows you to use all of the features from the ES6 specification and some additional not-in-spec features with
'stage-0', such as import thing from 'thing' instead of var thing = require('thing').
If we wanted to create a project where we use 'stage-0' features such as import we would need to add Babel as a
transpiler. You'll see projects using react and Vue and other commonJS based patterns implement stage-0 quite
often.
mkdir my-es6-app
cd my-es6-app
npm init
Create a new file called server.js and add a basic HTTP server.
Note that we use import http from 'http'; this is a stage-0 feature, and if it works it means we've got the
transpiler working correctly.
If you run node server.js it will fail not knowing how to handle the import.
Create a .babelrc file in the root of your directory and add the following settings:
You can now run the server with babel-node src/index.js.
Finally, it is not a good idea to run a transpiler at runtime in a production app. We can, however, implement
some scripts in our package.json to make it easier to work with.
"scripts": {
"start": "node dist/index.js",
"dev": "babel-node src/index.js",
"build": "babel src -d dist",
"postinstall": "npm run build"
},
The above will, on npm install, build the transpiled code into the dist directory, allowing npm start to use the
transpiled code for our production app.
npm run dev will boot the server with the Babel runtime, which is fine and preferred when working on a project locally.
Going one further, you could then install nodemon (npm install nodemon --save-dev) to watch for changes and
then reboot the node app.
This really speeds up working with Babel and NodeJS. In your package.json just update the "dev" script to use
nodemon.
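For example, the "dev" script from the earlier scripts block could be changed like this (a sketch; it assumes nodemon is installed as a dev dependency):

```json
"scripts": {
  "start": "node dist/index.js",
  "dev": "nodemon src/index.js --exec babel-node",
  "build": "babel src -d dist",
  "postinstall": "npm run build"
}
```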
Prerequisites:
1. Check out the new ES6 features at http://es6-features.org - it may help you decide whether you really want to use
them in your next NodeJS app
Here is a very short sample of a simple hello world app with ES6:
'use strict'
class Program
{
constructor()
{
this.message = 'hello es6 :)';
}
print()
{
setTimeout(() =>
{
console.log(this.message);
this.print();
}, Math.random() * 1000);
}
}
new Program().print();
You can run this program and observe how it prints the same message over and over again.
'use strict'
This line is actually required if you intend to use js es6. strict mode, intentionally, has different semantics from
normal code (please read more about it on MDN -
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode)
class Program
Unbelievable - a class keyword! Just for a quick reference - before ES6, the only way to define a class in JS was with
the... function keyword!
var myClassObject = new MyClass(); // generating a new object with a type of MyClass
When using OOP, a class is a very fundamental ability which assists the developer in representing a specific part of a
system (breaking down code is crucial as the code grows larger; for instance, when writing server-side code).
constructor()
{
this.message = 'hello es6 :)';
}
You've got to admit - this is pretty intuitive! This is the constructor of my class - this unique "function" will run every time an
object is created from this particular class (in our program - only once).
print()
{
setTimeout(() => // this is an 'arrow' function
{
console.log(this.message);
this.print(); // here we call the 'print' method from the class template itself (a recursion
in this particular case)
}, Math.random() * 1000);
}
Because print is defined in the class scope, it is actually a method, which can be invoked from any object created from the class:
new Program().print();
In conclusion: ES6 can simplify your code, making it more intuitive and easy to understand (compared with the
previous version of JS). You may try to rewrite some existing code of yours and see the difference for yourself.
ENJOY :)
Similar to the browser environment of JavaScript, Node.js provides a console module which offers simple logging
and debugging possibilities.
The most important methods provided by the console module are console.log, console.error and console.time.
But there are several others like console.info.
console.log
The parameters will be printed to the standard output (stdout) with a new line.
console.log('Hello World');
console.error
The parameters will be printed to the standard error (stderr) with a new line.
console.time, console.timeEnd
console.time starts a timer with a unique label that can be used to compute the duration of an operation. When
you call console.timeEnd with the same label, the timer stops and it prints the elapsed time in milliseconds to
stdout.
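For example (the label 'sum-loop' is arbitrary):

```javascript
// Start a timer, perform some work, then print the elapsed time.
console.time('sum-loop');

let sum = 0;
for (let i = 0; i < 1e6; i++) {
  sum += i;
}

// Stops the 'sum-loop' timer and prints e.g. "sum-loop: 4.2ms" to stdout.
console.timeEnd('sum-loop');
```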
Process Module
It is possible to use the process module to write directly to the standard output of the console, via the
process.stdout.write method. Unlike console.log, this method does not add a new line after your
output.
So in the following example the method is called two times, but no new line is added in between their outputs.
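A minimal sketch of such an example:

```javascript
const first = 'Hello ';
const second = 'World';

// Both parts end up on the same line: "Hello World"
process.stdout.write(first);
process.stdout.write(second + '\n');
```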
Formatting
One can use terminal (control) codes to issue specific commands like switching colors or positioning the cursor.
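For instance, the following sketch prints a line in green using raw ANSI escape codes (widely supported, but terminal-dependent):

```javascript
// "\x1b[32m" switches the foreground color to green;
// "\x1b[0m" resets all attributes to the terminal default.
const GREEN = '\x1b[32m';
const RESET = '\x1b[0m';

console.log(GREEN + 'All checks passed' + RESET);
```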
console.log(results.rows);
});
The following example creates a simple GET API for listing all users.
Example
// GET /api/users
app.get('/api/users', function(req, res){
return res.json(users); //return response as JSON
});
app.listen('3000', function(){
console.log('Server listening on port 3000');
});
Example
// GET /api/users
app.get('/api/users', function(req, res){
return res.json(users);
});
// POST /api/users
app.listen('3000', function(){
console.log('Server listening on port 3000');
});
process.on('SIGTERM', function () {
server.close(function () {
process.exit(0);
});
});
IISNode doesn't provide direct support for Virtual Directories or Nested Applications via configuration, so to achieve
this we'll need to take advantage of a feature of IISNode that isn't part of the configuration and is much lesser
known. All children of the <appSettings> element within the Web.config are added to the process.env object as
properties, using the appSetting key.
<appSettings>
<add key="virtualDirPath" value="/foo" />
</appSettings>
Now that we can use the <appSettings> element for configuration, let's take advantage of that and use it in our
server code.
// Public Directory
server.use(express.static(path.join(virtualDirPath, 'public')));
// Bower
server.use('/bower_components', express.static(path.join(virtualDirPath, 'bower_components')));
// Public Directory
server.use(express.static(path.join(virtualDirPath, 'public')));
// Bower
server.use('/bower_components', express.static(path.join(virtualDirPath, 'bower_components')));
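A sketch of how that appSetting reaches the server code (the empty-string fallback is an assumption for running outside IIS):

```javascript
// IISNode copies <appSettings> entries onto process.env, so the value of
// <add key="virtualDirPath" value="/foo" /> is available here.
const virtualDirPath = process.env.virtualDirPath || '';

// Prefix any static/route paths with the virtual directory.
const publicPath = virtualDirPath + '/public';
console.log(publicPath);
```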
server.listen(port, () => {
console.log(`Listening on ${port}`);
});
IISNode will handle scaling over multiple cores, manage the node.exe process, and auto-recycle your IIS
Application whenever your app is updated, just to name a few of its benefits.
Requirements
IISNode does have a few requirements before you can host your Node.js app in IIS.
1. Node.js must be installed on the IIS host, 32-bit or 64-bit, either are supported.
2. IISNode installed x86 or x64, this should match the bitness of your IIS Host.
3. The Microsoft URL-Rewrite Module for IIS installed on your IIS host.
This is key, otherwise requests to your Node.js app won't function as expected.
4. A Web.config in the root folder of your Node.js app.
5. IISNode configuration via an iisnode.yml file or an <iisnode> element within your Web.config.
Project Structure
This is the basic project structure of an IISNode/Node.js Web app. It looks almost identical to any non-IISNode Web
App except for the addition of the Web.config.
- /app_root
- package.json
- server.js
server.listen(port, () => {
console.log(`Listening on ${port}`);
});
The Web.config is just like any other IIS Web.config except the following two things must be present, URL
<rewrite><rules> and an IISNode <handler>. Both of these elements are children of the <system.webServer>
element.
Configuration
You can configure IISNode by using an iisnode.yml file or by adding the <iisnode> element as a child of
<system.webServer> in your Web.config. Both of these configurations can be used in conjunction with one another;
in that case, the Web.config will need to specify the iisnode.yml file AND any conflicting settings will be
taken from the iisnode.yml file instead. This configuration overriding cannot happen the other way around.
IISNode Handler
In order for IIS to know that server.js contains our Node.js Web App we need to explicitly tell it that. We can do
this by adding the IISNode <handler> to the <handlers> element.
<handlers>
<add name="iisnode" path="server.js" verb="*" modules="iisnode"/>
</handlers>
URL-Rewrite Rules
The final part of the configuration is ensuring that traffic intended for our Node.js app coming into our IIS instance
is being directed to IISNode. Without URL rewrite rules, we would need to visit our app by going to
http://<host>/server.js and even worse, when trying to request a resource supplied by server.js you'll get a
404. This is why URL rewriting is necessary for IISNode web apps.
<rewrite>
<rules>
<!-- First we consider whether the incoming URL matches a physical file in the /public folder
-->
<rule name="StaticContent" patternSyntax="Wildcard">
<action type="Rewrite" url="public/{R:0}" logRewrittenUrl="true"/>
<conditions>
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true"/>
</conditions>
<match url="*.*"/>
</rule>
This is a working Web.config file for this example, setup for a 64-bit Node.js install.
That's it, now visit your IIS Site and see your Node.js application working.
Since Socket.io sends requests starting with /socket.io, IISNode needs to communicate to IIS that these should
also be handled by IISNode and aren't just static file requests or other traffic. This requires a different <handler> than
standard IISNode apps.
<handlers>
<add name="iisnode-socketio" path="server.js" verb="*" modules="iisnode" />
</handlers>
In addition to the changes to the <handlers> we also need to add an additional URL rewrite rule. The rewrite rule
sends all /socket.io traffic to our server file where the Socket.io server is running.
If you are using IIS 8, you'll need to disable your webSockets setting in your Web.config in addition to adding the
above handler and rewrite rules. This is unnecessary in IIS 7 since there is no webSocket support.
-h, --help
Added in: v0.1.3 Print node command line options. The output of this option is less detailed than this document.
-e, --eval "script"
Added in: v0.5.2 Evaluate the following argument as JavaScript. The modules which are predefined in the REPL can
also be used in script.
-c, --check
Syntax check the script without executing it.
-i, --interactive
Added in: v0.7.7 Opens the REPL even if stdin does not appear to be a terminal.
-r, --require module
Preload the specified module at startup.
Follows require()'s module resolution rules. module may be either a path to a file, or a node module name.
--no-deprecation
Silence deprecation warnings.
--trace-deprecation
Print stack traces for deprecations.
--throw-deprecation
Throw errors for deprecations.
--no-warnings
Silence all process warnings (including deprecations).
--trace-warnings
Print stack traces for process warnings (including deprecations).
--trace-sync-io
Added in: v2.1.0 Prints a stack trace whenever synchronous I/O is detected after the first turn of the event loop.
--zero-fill-buffers
Added in: v6.0.0 Automatically zero-fills all newly allocated Buffer and SlowBuffer instances.
--preserve-symlinks
Added in: v6.3.0 Instructs the module loader to preserve symbolic links when resolving and caching modules.
By default, when Node.js loads a module from a path that is symbolically linked to a different on-disk location,
Node.js will dereference the link and use the actual on-disk "real path" of the module as both an identifier and as a
root path to locate other dependency modules. In most cases, this default behavior is acceptable. However, when
using symbolically linked peer dependencies, as illustrated in the example below, the default behavior causes an
exception to be thrown if moduleA attempts to require moduleB as a peer dependency:
{appDir}
├── app
│ ├── index.js
│ └── node_modules
│ ├── moduleA -> {appDir}/moduleA
│ └── moduleB
│ ├── index.js
│ └── package.json
└── moduleA
├── index.js
└── package.json
The --preserve-symlinks command line flag instructs Node.js to use the symlink path for modules as opposed to the
real path, allowing symbolically linked peer dependencies to be found.
Note, however, that using --preserve-symlinks can have other side effects. Specifically, symbolically linked native
modules can fail to load if those are linked from more than one location in the dependency tree (Node.js would see
those as two separate modules and would attempt to load the module multiple times, causing an exception to be
thrown).
--track-heap-objects
Added in: v2.4.0 Track heap object allocations for heap snapshots.
--prof-process
Added in: v6.0.0 Process v8 profiler output generated using the v8 option --prof.
--v8-options
Note: v8 options allow words to be separated by both dashes (-) or underscores (_).
--tls-cipher-list=list
Added in: v4.0.0 Specify an alternative default TLS cipher list. (Requires Node.js to be built with crypto support.
(Default))
--enable-fips
Added in: v6.0.0 Enable FIPS-compliant crypto at startup. (Requires Node.js to be built with ./configure --openssl-
fips)
--force-fips
Added in: v6.0.0 Force FIPS-compliant crypto on startup. (Cannot be disabled from script code.) (Same requirements
as --enable-fips)
--icu-data-dir=file
Added in: v0.11.15 Specify ICU data load path. (overrides NODE_ICU_DATA)
Environment Variables
NODE_DEBUG=module[,…]
Added in: v0.1.32 ','-separated list of core modules that should print debug information.
NODE_PATH=path[:…]
Added in: v0.1.32 ':'-separated list of directories prefixed to the module search path.
NODE_DISABLE_COLORS=1
Added in: v0.3.0 When set to 1 colors will not be used in the REPL.
NODE_ICU_DATA=file
Added in: v0.11.15 Data path for ICU (Intl object) data. Will extend linked-in data when compiled with small-icu
support.
NODE_REPL_HISTORY=file
Added in: v5.0.0 Path to the file used to store the persistent REPL history. The default path is ~/.node_repl_history,
which is overridden by this variable. Setting the value to an empty string ("" or " ") disables persistent REPL history.
app.listen(3000, function () {
console.log('Example app listening on port 3000!');
});
Koa
var koa = require('koa');
var app = koa();
app.use(function *(next){
var start = new Date;
yield next;
var ms = new Date - start;
console.log('%s %s - %s', this.method, this.url, ms);
});
app.use(function *(){
this.body = 'Hello World';
});
app.listen(3000);
program
.version('0.0.1')
program
.command('hi')
.description('initialize project configuration')
.action(function(){
console.log('Hi my Friend!!!');
});
program
.command('bye [name]')
.description('initialize project configuration')
.action(function(name){
console.log('Bye ' + name + '. It was good to see you!');
});
program
.command('*')
.action(function(env){
console.log('Enter a Valid command');
});

program.parse(process.argv);
Vorpal.js
const vorpal = require('vorpal')();
vorpal
.command('foo', 'Outputs "bar".')
.action(function(args, callback) {
this.log('bar');
callback();
});
vorpal
.delimiter('myapp$')
.show();
In order to get started, you'll want to install Grunt's command line interface (CLI) globally.
Preparing a new Grunt project: A typical setup will involve adding two files to your project: package.json and the
Gruntfile.
package.json: This file is used by npm to store metadata for projects published as npm modules. You will list grunt
and the Grunt plugins your project needs as devDependencies in this file.
Gruntfile: This file is named Gruntfile.js and is used to configure or define tasks and load Grunt plugins.
Example package.json:
{
"name": "my-project-name",
"version": "0.1.0",
"devDependencies": {
"grunt": "~0.4.5",
"grunt-contrib-jshint": "~0.10.0",
"grunt-contrib-nodeunit": "~0.4.1",
"grunt-contrib-uglify": "~0.5.0"
}
}
Example gruntfile:
module.exports = function(grunt) {
// Project configuration.
grunt.initConfig({
pkg: grunt.file.readJSON('package.json'),
uglify: {
options: {
banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n'
},
build: {
src: 'src/<%= pkg.name %>.js',
dest: 'build/<%= pkg.name %>.min.js'
}
}
});
// Default task(s).
grunt.registerTask('default', ['uglify']);
};
To use a gruntplugin, you first need to add it as a dependency to your project. Let's use the jshint plugin as an
example.
The --save-dev option is used to add the plugin to the package.json; this way the plugin is always installed after an
npm install.
You can load your plugin in the gruntfile file using loadNpmTasks.
grunt.loadNpmTasks('grunt-contrib-jshint');
You configure the task in the gruntfile adding a property called jshint to the object passed to grunt.initConfig.
grunt.initConfig({
jshint: {
all: ['Gruntfile.js', 'lib/**/*.js', 'test/**/*.js']
}
});
Don't forget you can have other properties for other plugins you are using.
To just run the task with the plugin you can use the command line.
grunt jshint
grunt.registerTask('default', ['jshint']);
The default task runs with the grunt command in the terminal without any options.
"dependencies": {
"ws": "*"
},
var WebSocket = require('ws');
var ws = new WebSocket('ws://localhost:8080');

ws.on('open', function () {
ws.send('something');
});
Create a file called build.js at the root of your project folder, containing the following:
var Metalsmith = require('metalsmith');
var inPlace = require('metalsmith-in-place');

Metalsmith(__dirname)
.use(inPlace('handlebars'))
.build(function(err) {
if (err) throw err;
console.log('Build finished!');
});
Create a folder called src at the root of your project folder. Create index.html in src, containing the following:
---
title: My awesome blog
---
{{ title }}
Running node build.js will now build all files in src. After running this command, you'll have index.html in your
build folder, with the following contents:
var options = require('commander');

options
.option("-v, --verbose", "Be verbose");
options
.command("convert")
.alias("c")
.description("Converts input file to output file")
.option("-i, --in-file <file_name>", "Input file")
.option("-o, --out-file <file_name>", "Output file")
.action(doConvert);
options.parse(process.argv);
if (!options.args.length) options.help();
function doConvert(options){
//do something with options.inFile and options.outFile
};
options
.option("-v, --verbose")
.parse(process.argv);
if (options.verbose){
console.log("Let's make some noise!");
}
//test: the text within brackets should appear when clicking on said button
//window.alert('You clicked on me. - jQuery');
//if the 'nick' member of the JSON does not equal the predeclared string (as it was
//initialized), then the backend script was executed, meaning that communication has
//been established
if(data.Nick != predeclared){
document.getElementById("modify").innerHTML = "JSON changed!\n" +
jsonstr;
};
}
});
});
});
//'domaintest_route.js'
//an Express router listening to GET requests - in this case, it's empty, meaning that
//nothing is displayed when you reach 'localhost/domaintest'
router.get('/', function(req, res, next) {
});
//same for POST requests - notice, how the AJAX request above was defined as POST
router.post('/', function(req, res) {
res.setHeader('Content-Type', 'application/json');
res.send(sent_data);
});
module.exports = router;
Build small, single-purpose modules: not only in terms of code size, but also in terms of scope, so that each serves a
single purpose.
a - "Small is beautiful"
b - "Make each program do one thing well."
The Reactor Pattern is the heart of the node.js asynchronous nature. It allows the system to be implemented as a
single-threaded process with a series of event generators and event handlers, with the help of an event loop that runs
continuously.
eventEmitter.emit('doorOpen');
db.close();
});
myNewDB is the DB name; if it does not exist in the database, it will be created automatically by this call.
Note: this is a "ready to run" example. Just don't forget to get jQuery and install the required modules.
Project structure:
project
│ package.json
│ index.html
│
├───js
│ main.js
│ jquery-1.12.0.min.js
│
└───srv
│ app.js
├─── models
│ task.js
└─── tasks
data-processor.js
app.js:
app.use(express.static(__dirname + '/../'));
t.save(function(err, task){
taskProcessor.send(params);
response.status(200).json(task);
});
});
mongoose.connect('mongodb://localhost/test');
http.listen('1234');
task.js:
mongoose.model('Task', taskSchema);
module.exports = mongoose.model('Task');
data-processor.js:
process.on('message', function(msg){
    init = function(){
        processData(msg.message);
    }.bind(this)();

    function processData(message){
        //send status update to the main app
        process.send({ status: 'We have started processing your data.' });
    }
});

process.on('uncaughtException',function(err){
    console.log("Error happened: " + err.message + "\n" + err.stack + ".\n");
    console.log("Gracefully finish the routine.");
});
index.html:
<!DOCTYPE html>
<html>
<head>
<script src="./js/jquery-1.12.0.min.js"></script>
<script src="./js/main.js"></script>
</head>
<body>
<p>Example of processing long-running node requests.</p>
<button id="go" type="button">Run</button>
<br />
<p>Log:</p>
<textarea id="log" rows="20" cols="50"></textarea>
</body>
</html>
main.js:
$(document).on('ready', function(){
$('#go').on('click', function(e){
//clear log
$("#log").val('');
if(response.status != 'Done!'){
checkTaskTimeout = setTimeout(updateStatus, 500);
}
});
}
package.json:
{
"name": "nodeProcessor",
"dependencies": {
"body-parser": "^1.15.2",
"express": "^4.14.0",
"html": "0.0.10",
"mongoose": "^4.5.5"
}
}
Disclaimer: this example is intended to give you a basic idea. To use it in a production environment, it needs
improvements.
6. Open Your favorite code editor and add the following code :
'use strict';
module.exports = app
PS: I'm using a special hack here in order to make Socket.io work with Express, because it simply doesn't
work out of the box.
Now create a .json file and name it Manifest.json; open it and paste the following:
{
"name": "Application Name",
"gcm_sender_id": "GCM Project ID"
}
1. I set up and sent a normal index.html page that will also use socket.io.
2. I'm listening on a connection event fired from the front-end, aka my index.html page (it will be fired once a
new client successfully connects to our pre-defined link).
3. I'm sending a special token known as the registration token from my index.html via the socket.io new_user
event; such a token will be our user's unique passcode, and each code is usually generated by a browser
supporting the Web notification API (read more here).
4. I'm simply using the node-gcm module to send my notification, which will be handled and shown later on
using Service Workers.
This is from the NodeJS point of view. In other examples I will show how we can send custom data, icons etc. in our
push message.
When your node process starts up you should see the message
Then you set up the remote debugging target in your specific IDE.
Once those are configured simply run the debug target as you normally would and it will stop on your breakpoints.
package.json
"dependencies": {
"mongoose": "^4.5.5"
}
server.js (ECMA 6)
mongoose.connect('mongodb://localhost:27017/stackoverflow-example');
const db = mongoose.connection;
db.on('error', console.error.bind(console, 'DB connection error!'));
mongoose.connect('mongodb://localhost:27017/stackoverflow-example');
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'DB connection error!'));
app/models/user.js (ECMA 6)
user.save((err) => {
if (err) throw err;
console.log('User saved!');
});
ECMA5.1:
user.save(function (err) {
if (err) throw err;
console.log('User saved!');
});
User.findOne({
name: 'stack'
}, (err, user) => {
if (err) throw err;
if (!user) {
console.log('No user was found');
} else {
console.log('User was found');
}
});
ECMA5.1:
User.findOne({
name: 'stack'
}, function (err, user) {
if (err) throw err;
if (!user) {
console.log('No user was found');
} else {
console.log('User was found');
}
});
//Import Libraries
var express = require('express'),
session = require('express-session'),
mongoose = require('mongoose'),
request = require('request');
//Connect to Mongo DB
mongoose.connect(config.getDBString());
//Configure Routes
app.use(config.API_PATH, userRoutes());
config.js: This file will manage all the configuration related params which will remain same throughout.
var config = {
VERSION: 1,
BUILD: 1,
URL: 'http://127.0.0.1',
API_PATH : '/api',
PORT : process.env.PORT || 8080,
DB : {
//MongoDB configuration
HOST : 'localhost',
PORT : '27017',
DATABASE : 'db'
},
/*
* Get DB Connection String for connecting to MongoDB database
*/
getDBString : function(){
return 'mongodb://'+ this.DB.HOST +':'+ this.DB.PORT +'/'+ this.DB.DATABASE;
},
/*
* Get the http URL
*/
module.exports = config;
//Create a User
create: function(req, res){
var repassword = req.body.repassword;
var password = req.body.password;
var userEmail = req.body.email;
//Create User
var user = new User();
user.name = req.body.name;
user.email = req.body.email;
user.password = passwordHash;
user.dob = Date.parse(req.body.dob) || "";
user.gender = req.body.gender;
module.exports = UserController;
router.route('/users')
.post(UserController.create);
return router;
module.exports = UserRoutes;
The above example may appear too big, but a beginner at node.js with a little blend of express knowledge who tries
to go through it will find it easy and really helpful.
Bad way:
Router.route('/')
.get((req, res) => {
Request.find((err, r) => {
if(err){
console.log(err)
} else {
res.json(r)
}
})
})
.post((req, res) => {
const request = new Request({
type: req.body.type,
info: req.body.info
});
request.info.user = req.user._id;
console.log("ABOUT TO SAVE REQUEST", request);
request.save((err, r) => {
if (err) {
res.json({ message: 'there was an error saving your r' });
} else {
res.json(r);
}
});
});
Better way:
Router.route('/')
.get((req, res, next) => {
Request.find((err, r) => {
if (err) {
return next(err)
}
res.json(r)
})
})
.post((req, res, next) => {
const request = new Request({
type: req.body.type,
info: req.body.info
});
request.info.user = req.user._id;
console.log("ABOUT TO SAVE REQUEST", request);
request.save((err, r) => {
if (err) {
return next(err)
}
res.json(r)
});
});
Folder structure
project root
| server.js
|____views
| index.html
| page1.html
server.js
var express = require('express');
var path = require('path');
var app = express();
app.listen(8080);
Note that sendFile() just streams a static file as response, offering no opportunity to modify it. If you are serving
an HTML file and want to include dynamic data with it, then you will need to use a template engine such as Pug,
Mustache, or EJS.
// When a client requests a connection with the server, the server creates a new
// socket dedicated to that client.
server.on('connection', function(socket) {
console.log('A new connection has been established.');
// Now that a TCP connection has been established, the server can send data to
// the client by writing to its socket.
socket.write('Hello, client.');
// The server can also receive data from the client by reading from its socket.
socket.on('data', function(chunk) {
console.log(`Data received from client: ${chunk.toString()}.`);
});
// When the client requests to end the TCP connection with the server, the server
// ends the connection.
socket.on('end', function() {
console.log('Closing connection with the client');
});
// The client can also receive data from the server by reading from its socket.
client.on('data', function(chunk) {
console.log(`Data received from the server: ${chunk.toString()}.`);
// Request an end to the connection after the data has been received.
client.end();
});
client.on('end', function() {
console.log('Requested an end to the TCP connection');
});
require('./hello')(function (err, xml) {
if (err)
throw err;
console.log(xml);
})
const Promise = require('bluebird')
const fs = require('fs')

Promise.promisifyAll(fs)
// now you can use promise based methods on 'fs' with the Async suffix
fs.readFileAsync('file.txt').then(contents => {
console.log(contents)
}).catch(err => {
console.error('error reading', err)
})
Example of filter:
Example of reduce:
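As a sketch, here are native equivalents of what bluebird's Promise.filter and Promise.reduce offer (the helper names here are my own):

```javascript
// Resolve all inputs (values or promises), then filter the resolved values.
async function promiseFilter(values, predicate) {
  const resolved = await Promise.all(values);
  return resolved.filter(predicate);
}

// Resolve all inputs, then reduce the resolved values.
async function promiseReduce(values, reducer, initial) {
  const resolved = await Promise.all(values);
  return resolved.reduce(reducer, initial);
}

promiseFilter([Promise.resolve(1), 2, Promise.resolve(3)], n => n % 2 === 1)
  .then(odd => console.log(odd)); // [ 1, 3 ]

promiseReduce([1, Promise.resolve(2), 3], (sum, n) => sum + n, 0)
  .then(total => console.log(total)); // 6
```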
return data.toString().toUpperCase()
})
promiseReturningFunction('file.txt').then(console.log)
This works by using the await keyword to suspend the state of an async function, until the resolution of a promise,
and using the async keyword to declare such async functions, which return a promise.
function myAsyncFunction() {
return aFunctionThatReturnsAPromise()
// doSomething is a sync function
.then(result => doSomething(result))
.catch(handleError);
}
So here is where Async/Await enters the action in order to make our function cleaner:
try {
result = await aFunctionThatReturnsAPromise();
} catch (error) {
handleError(error);
}
So the keyword async would be similar to writing return new Promise((resolve, reject) => {...}).
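For instance, these two functions behave the same way from the caller's perspective:

```javascript
// An async function always returns a promise...
async function withAsync() {
  return 42;
}

// ...much like a function that wraps its result in a promise explicitly.
function withPromise() {
  return new Promise((resolve) => {
    resolve(42);
  });
}

withAsync().then(value => console.log(value));   // 42
withPromise().then(value => console.log(value)); // 42
```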
try {
try {
const sql = `SELECT f.id, f.width, f.height, f.code, f.filename
FROM flags f
WHERE f.id = ?
LIMIT 1`;
const flags = await connection.query(sql, req.params.id);
if (flags.length === 0)
return res.status(404).send({ message: 'flag not found' });
} finally {
pool.releaseConnection(connection);
}
} catch (err) {
// handle errors here
}
});
try{
await User.findByIdAndUpdate(user._id, {
$push: {
tokens: token
}
}).exec()
}catch(e){
handleError(e)
}
getTemperature(function(temp) {
console.log(`the temp is ${temp}`)
})
But there were a few really frustrating issues with callbacks, so we all started using promises.
getTemperature()
.then(temp => console.log(`the temp is ${temp}`))
.then(() => getAirPollution())
.then(pollution => console.log(`and the pollution is ${pollution}`))
// the temp is 32
// and the pollution is 0.5
This was a bit better. Finally, we found async/await, which still uses promises under the hood.
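Rewritten with async/await (using stubbed getTemperature and getAirPollution helpers so the sketch is self-contained), the same chain reads like synchronous code:

```javascript
// Stub implementations standing in for the real async operations.
const getTemperature = () => Promise.resolve(32);
const getAirPollution = () => Promise.resolve(0.5);

async function report() {
  const temp = await getTemperature();
  console.log(`the temp is ${temp}`);

  const pollution = await getAirPollution();
  console.log(`and the pollution is ${pollution}`);
}

report();
// the temp is 32
// and the pollution is 0.5
```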
app.listen(8080)
{
var x = 1 // will escape the scope
let y = 2 // bound to lexical scope
const z = 3 // bound to lexical scope, constant
}
console.log(x) // 1
console.log(y) // ReferenceError: y is not defined
console.log(z) // ReferenceError: z is not defined
Run in RunKit
performSomething(result => {
this.someVariable = result
})
vs
performSomething(function(result) {
this.someVariable = result
}.bind(this))
Run in RunKit
The function passed to .map can also be written as arrow function by removing the function keyword and instead
adding the arrow =>:
However, this can be written even more concise. If the function body consists of only one statement and that
statement computes the return value, the curly braces of wrapping the function body can be removed, as well as
the return keyword.
Run in RunKit
const b = 3;
let c = [1,2,3,,{}];
let d = 3;
For Jade/PugJS:
extends layout
script(src="http://code.jquery.com/jquery-3.1.0.min.js")
script(src="/magic.js")
h1 Quote: !{quote}
form(method="post" id="changeQuote")
input(type='text', placeholder='Set quote of the day', name='quote')
input(type="submit", value="Save")
For EJS:
<script src="http://code.jquery.com/jquery-3.1.0.min.js"></script>
<script src="/magic.js"></script>
<h1>Quote: <%=quote%> </h1>
<form method="post" id="changeQuote">
<input type="text" placeholder="Set quote of the day" name="quote"/>
<input type="submit" value="Save">
</form>
$(document).ready(function(){
$("form#changeQuote").on('submit', function(e){
e.preventDefault();
var data = $('input[name=quote]').val();
$.ajax({
type: 'post',
url: '/ajax',
data: data,
dataType: 'text'
});
});
});
And there you have it! When you click Save the quote will change!
if (!range) {
    return res.sendStatus(416);
}
res.writeHead(206, {
'Transfer-Encoding': 'chunked',
"Accept-Ranges": "bytes",
"Content-Length": chunksize,
"Content-Type": mime.lookup(req.params.filename)
});
.on('end', function () {
console.log('Stream Done');
})
res.end(err);
})
});
Finally, the .pipe call lets Node.js know to keep the connection open and to send additional chunks as needed.
res.contentType('flv');
.preset('flashvideo')
.on('end', function () {
console.log('Stream Done');
})
res.send(err.message);
})
Once you have installed node_redis module you are good to go. Let’s create a simple file, app.js, and see how to
connect with Redis from Node.js.
app.js
var redis = require('redis');
var client = redis.createClient();
By default, redis.createClient() will use 127.0.0.1 and 6379 as the hostname and port respectively. If you have a
different host/port you can supply them as following:
var client = redis.createClient(port, host);
Now, you can perform some action once a connection has been established. Basically, you just need to listen for
connect events as shown below.
client.on('connect', function() {
console.log('connected');
});
Now, type node app in the terminal to run the app. Make sure your Redis server is up and running before running
this snippet.
Storing Strings
All the Redis commands are exposed as different functions on the client object. To store a simple string use the
following syntax:
client.set('framework', 'AngularJS');
Or
client.set(['framework', 'AngularJS']);
The above snippets store a simple string AngularJS against the key framework. You should note that both snippets do the same thing. The only difference is that the first one passes a variable number of arguments while the latter passes an args array to the client.set() function. You can also pass an optional callback to get a notification when the operation is complete:
client.set('framework', 'AngularJS', function(err, reply) {
    console.log(reply);
});
If the operation failed for some reason, the err argument to the callback represents the error. To retrieve the value
of the key do the following:
client.get('framework', function(err, reply) {
    console.log(reply);
});
client.get() lets you retrieve a key stored in Redis. The value of the key can be accessed via the callback
argument reply. If the key doesn’t exist, the value of reply will be empty.
Storing Hash
Many times storing simple values won’t solve your problem. You will need to store hashes (objects) in Redis. For
that you can use hmset() function as following:
client.hmset('frameworks', 'javascript', 'AngularJS', 'css', 'Bootstrap', 'node', 'Express');
The above snippet stores a hash in Redis that maps each technology to its framework. The first argument to
hmset() is the name of the key. Subsequent arguments represent key-value pairs. Similarly, hgetall() is used to
retrieve the value of the key. If the key is found, the second argument to the callback will contain the value which is
an object.
client.hgetall('frameworks', function(err, object) {
    console.log(object);
});
Note that Redis doesn’t support nested objects. All the property values in the object will be coerced into strings
before getting stored. You can also use the following syntax to store objects in Redis:
client.hmset('frameworks', {
'javascript': 'AngularJS',
'css': 'Bootstrap',
'node': 'Express'
});
An optional callback can also be passed to know when the operation is completed.
All the functions (commands) can be called with uppercase/lowercase equivalents. For example, client.hmset() and client.HMSET() are the same.
If you want to store a list of items, you can use Redis lists. To store a list use the following syntax:
client.rpush(['frameworks', 'angularjs', 'backbone'], function(err, reply) {
    console.log(reply); // prints 2
});
The above snippet creates a list called frameworks and pushes two elements to it. So, the length of the list is now
two. As you can see I have passed an args array to rpush. The first item of the array represents the name of the key
while the rest represent the elements of the list. You can also use lpush() instead of rpush() to push the elements
to the left.
To retrieve the elements of the list you can use the lrange() function as following:
client.lrange('frameworks', 0, -1, function(err, reply) {
    console.log(reply); // ['angularjs', 'backbone']
});
Just note that you get all the elements of the list by passing -1 as the third argument to lrange(). If you want a
subset of the list, you should pass the end index here.
Storing Sets
Sets are similar to lists, but the difference is that they don’t allow duplicates. So, if you don’t want any duplicate
elements in your list you can use a set. Here is how we can modify our previous snippet to use a set instead of list.
client.sadd(['tags', 'angularjs', 'backbonejs', 'emberjs'], function(err, reply) {
    console.log(reply); // 3
});
As you can see, the sadd() function creates a new set with the specified elements. Here, the length of the set is
three. To retrieve the members of the set, use the smembers() function as following:
client.smembers('tags', function(err, reply) {
    console.log(reply);
});
This snippet will retrieve all the members of the set. Just note that the order is not preserved while retrieving the
members.
This was a list of the most important data structures found in every Redis powered app. Apart from strings, lists,
sets, and hashes, you can store sorted sets, hyperLogLogs, and more in Redis. If you want a complete list of
commands and data structures, visit the official Redis documentation. Remember that almost every Redis
command is exposed on the client object offered by the node_redis module.
Sometimes you may need to check if a key already exists and proceed accordingly. To do so you can use the exists() function as shown below:
client.exists('key', function(err, reply) {
    if (reply === 1) {
        console.log('exists');
    } else {
        console.log('does not exist');
    }
});
At times you will need to clear some keys and reinitialize them. To clear the keys, you can use the del command as shown below:
client.del('frameworks', function(err, reply) {
    console.log(reply);
});
You can also give an expiration time to an existing key as follows:
client.set('key1', 'val1');
client.expire('key1', 30);
The above snippet assigns an expiration time of 30 seconds to the key key1.
Redis also supports incrementing and decrementing keys. To increment a key, use the incr() function as shown below:
client.set('key1', 10, function() {
    client.incr('key1', function(err, reply) {
        console.log(reply); // 11
    });
});
The incr() function increments a key value by 1. If you need to increment by a different amount, you can use the incrby() function. Similarly, to decrement a key you can use functions like decr() and decrby().
Let's assume that you have to parse a URL using JavaScript and the NodeJS querystring module.
To accomplish this, all you have to do is insert the following statement in your file:
const querystring = require("querystring");
First, this loads the querystring module, which provides utilities for parsing and formatting URL query strings.
Then, we parse a URL using the .parse() method. It parses a URL query string (str) into a collection of key and value
pairs.
Unfortunately, browsers don't have the require method defined, but Node.js does.
With Browserify you can write code that uses require in the same way that you would use it in Node. So, how do you solve this? It's simple:
1. Install Browserify with npm:
npm install -g browserify
2. Change into the directory in which your file.js is and Install our querystring module with npm:
npm install querystring
Note: If you don't change in the specific directory the command will fail because it can't find the file which contains
the module.
3. Now recursively bundle up all the required modules starting at file.js into a single file called bundle.js (or
whatever you like to name it) with the browserify command:
browserify file.js -o bundle.js
<script src="bundle.js"></script>
What happens is that Browserify combines your old .js file (file.js, that is) and all the modules it requires into one single newly created file, bundle.js.
Important
Please keep in mind that changes you make to your file.js will not affect the behaviour of your program on their own. Your changes only take effect once bundle.js is regenerated.
This means that if you edit file.js for any reason, the changes will not have any effect until you run browserify again, since the browser only loads the bundled bundle.js.
// by default the collection created in the db would be the first parameter we use (or the plural of it)
module.exports = mongoose.model('Auto', AutoSchema);
// we can override it and define the collection name by specifying it in the third parameter.
module.exports = mongoose.model('Auto', AutoSchema, 'collectionName');
Remember, methods must be added to the schema before compiling it with mongoose.model(), as done above.
autoObj.save(function(err, insertedAuto) {
if (err) return console.error(err);
insertedAuto.speak();
// output: Hello this is NewName and I have counts of 10
});
You can also specify the second parameter as an object of all the fields you need.
Methods
update()
updateOne()
updateMany()
replaceOne()
Update()
db.lights.update(
   { room: "Bedroom" },
   { $set: { status: "On" } }
)
This operation searches the 'lights' collection for a document where room is Bedroom (1st parameter). It then updates the matching document's status property to On (2nd parameter) and returns a WriteResult object that looks like this:
UpdateOne
db.countries.updateOne(
   { country: "Sweden" },
   { $set: { capital: "Stockholm" } }
)
This operation searches the 'countries' collection for a document where country is Sweden (1st parameter). It then updates the matching document's capital property to Stockholm (2nd parameter) and returns a WriteResult object that looks like this:
UpdateMany
db.food.updateMany(
{ sold: { $lt: 10 } },
{ $set: { sold: 55 } }
)
This operation updates all documents (in a 'food' collection) where sold is less than 10 (1st parameter) by setting sold to 55 (2nd parameter). It then returns a WriteResult object that looks like this:
ReplaceOne
The following operation replaces the document { country: "Spain" } with document { country: "Finland" }
db.countries.replaceOne(
{ country: "Spain" },
{ country: "Finland" }
)
And returns:
This module lets you authenticate using a username and password in your Node.js applications.
newUser.save(function() {
// Pass the user to the callback
return next(null, newUser);
});
}
});
});
Creating routes :
// ...
app.use(passport.initialize());
app.use(passport.session());
// Sign-in route
// Passport strategies are middlewares
app.post('/login', passport.authenticate('localSignin', {
successRedirect: '/me',
failureRedirect: '/login'
}));
// Sign-up route
app.post('/register', passport.authenticate('localSignup', {
successRedirect: '/',
failureRedirect: '/signup'
}));
app.listen(3000);
Note that the passport.serializeUser() and passport.deserializeUser() methods must be defined. Passport will serialize and deserialize user instances to and from the session.
passport.serializeUser(function(user, next) {
// Serialize the user in the session
next(null, user);
});
// Initializing passport
app.use(passport.initialize());
app.use(passport.session());
Implementing strategy :
newUser.save(function() {
// Pass the user to the callback
return next(null, newUser);
});
}
});
});
Creating routes :
// ...
app.use(passport.initialize());
app.use(passport.session());
// Authentication route
//...
app.listen(3000);
if(user) {
console.log("User Exists!")
//All the data of the user can be accessed by user.x
res.json({"success" : true});
return;
} else {
res.json({"success" : false});
console.log("Error" + errorResponse());
return;
}
})(req, res, next);
});
Consider the following example. In this example, a folder named config has been created in the root directory, containing the passport.js and google.js files. In your app.js, include the following:
In the passport.js file in the config folder include the following code
module.exports = function(app){
app.use(passport.initialize());
app.use(passport.session());
passport.serializeUser(function(user, done){
done(null, user);
});
passport.deserializeUser(function (user, done) {
done(null, user);
});
google();
};
In this example, if the user is not in the DB, then a new user is created in the DB for local reference, using the field googleId in the user model.
Decoupling
Modules become less coupled, which makes them easier to maintain.
console.log("Hello World");
node helloworld.js
console.log('Starting server...');
var config = {
port: 80,
contentType: 'application/json; charset=utf-8'
};
// JSON-API server on port 80
rl.pause();
console.log('Something long is happening here...');
var cliConfig = {
promptPrefix: ' > '
}
/*
Commands recognition
BEGIN
*/
var commands = {
eval: function(arg) { // Try typing in console: eval 2 * 10 ^ 3 + 2 ^ 4
arg = arg.join(' ');
try { console.log(eval(arg)); }
catch (e) { console.log(e); }
},
exit: function(arg) {
rl.setPrompt(cliConfig.promptPrefix);
rl.prompt();
This is the way to go if you'd like to define all your models in one file, or if you want to have extra control of your
model definition.
/* Initialize Sequelize */
const config = {
username: "database username",
password: "database password",
database: "database name",
host: "database's host URL",
dialect: "mysql" // Other options are postgres, sqlite, mariadb and mssql.
}
var Sequelize = require("sequelize");
var sequelize = new Sequelize(config);
/* Define Models */
sequelize.define("MyModel", {
name: Sequelize.STRING,
comment: Sequelize.TEXT,
date: {
type: Sequelize.DATE,
allowNull: false
}
});
For the documentation and more examples, check out the doclets documentation, or sequelize.com's
documentation.
2. sequelize.import(path)
If your model definitions are broken into a file for each, then import is your friend. In the file where you initialize
Sequelize, you need to call import like so:
/* Initialize Sequelize */
// Check previous code snippet for initialization
/* Define Models */
sequelize.import("./models/my_model.js"); // The path could be relative or absolute
Then in your model definition files, your code will look something like this:
module.exports = function(sequelize, DataTypes) {
    return sequelize.define("MyModel", {
        name: DataTypes.STRING,
        comment: DataTypes.TEXT,
        date: {
            type: DataTypes.DATE,
            allowNull: false
        }
    });
};
For more information on how to use import, check out sequelize's express example on GitHub.
You will also need to install a supported database Node.js module. You only need to install the one you are using.
For PostgreSQL
npm install --save pg pg-hstore
For SQLite
npm install --save sqlite3
For MSSQL
npm install --save tedious
Once you have your setup installed, you can include and create a new Sequelize instance like so.
ES5 syntax
var Sequelize = require('sequelize');
var sequelize = new Sequelize('database', 'username', 'password');
You now have an instance of sequelize available. You could if you so feel inclined call it a different name such as
or
that part is your prerogative. Once you have this installed you can use it inside of your application as per the API
documentation http://docs.sequelizejs.com/en/v3/api/sequelize/
Your next step after install would be to set up your own model
Now you have to create a PostgreSQL connection, which you can later query.
var pg = require("pg")
var connectionString = "pg://postgres:postgres@localhost:5432/students";
var client = new pg.Client(connectionString);
client.connect();
Now you have to create a mysql connection, which you can later query.
var mysql = require('mysql');
var connection = mysql.createConnection({
    host     : 'localhost',
    user     : 'your-username',
    password : 'your-password',
    database : 'your-database'
});
connection.connect();
connection.end();
In the next example you will learn how to query the connection object.
All queries in a MySQL connection are done one after another. This means that if you want to do 10 queries and each query takes 2 seconds, it will take 20 seconds to complete the whole execution. The solution is to create 10 connections and run each query in a different connection. This can be done automatically using a connection pool:
var mysql = require('mysql');
var pool = mysql.createPool({
    connectionLimit : 10,
    host            : 'example.org',
    user            : 'bobby',
    password        : 'pass'
});
for(var i=0;i<10;i++){
pool.query('SELECT 1 as example', function(err, rows, fields) {
if (err) throw err;
console.log(rows[0].example); //Show 1
});
}
Multitenancy is a common requirement of enterprise applications nowadays, and creating a connection pool for each database on a database server is not recommended. Instead, we can create a connection pool with the database server and then switch between the databases hosted on it on demand.
Suppose our application has a different database for each firm, hosted on a database server. We will connect to the respective firm's database when a user hits the application. Here is an example of how to do that:
pool.getConnection(function(err, connection){
if(err){
return cb(err);
}
connection.changeUser({database : "firm1"});
connection.query("SELECT * from history", function(err, data){
connection.release();
cb(err, data);
});
});
When defining the pool configuration I did not give the database name, but only the database server, i.e.
{
connectionLimit : 10,
host : 'example.org',
user : 'bobby',
password : 'pass'
}
So when we want to use a specific database on the database server, we ask the connection to switch to that database by using:
connection.changeUser({database : "firm1"});
SELECT 1; SELECT 2;
You could just run them using pool.query as seen elsewhere, however if you only have one free connection in the pool you must wait until a connection becomes available before you can run the second query.
You can, however, retain an active connection from the pool and run as many queries as you would like using a
single connection using pool.getConnection:
pool.getConnection(function (err, conn) {
    if (err) return callback(err);
    conn.query('SELECT 1 AS seq', function (err, rows) {
        if (err) throw err;
        conn.query('SELECT 2 AS seq', function (err, rows) {
            if (err) throw err;
            conn.release();
            callback();
        });
    });
});
Note: You must remember to release the connection, otherwise there is one less MySQL connection available to
the rest of the pool!
For more information on pooling MySQL connections check out the MySQL docs.
module.exports = {
getConnection: (callback) => {
return pool.getConnection(callback);
}
}
// app.js
const db = require('./db');
var q = mysql.query('SELECT `name` FROM `pokedex` WHERE `id` = ?', [ 25 ], function (err, result) {
if (err) {
// Table 'test.pokedex' doesn't exist
err.query = q.sql; // SELECT `name` FROM `pokedex` WHERE `id` = 25
callback(err);
}
else {
callback(null, result);
}
});
Step 1: Create a directory/folder with the name of the project you intend to create. Initialize a node application using the npm init command, which will create a package.json in the current directory.
mkdir mySqlApp
//folder created
cd mySqlApp
//change to newly created directory
npm init
//answer all the question ..
npm install
//This will complete quickly since we have not added any packages to our app.
Step 2: Now we will create an App.js file in this directory and install some packages which we are going to need to connect to the SQL db.
Step 3: Now we will add a basic configuration variable to our application which will be used by mssql module to
establish a connection .
Step 4: This is the easiest step, where we start the application; the application will connect to the SQL server and print out some simple results.
node App.js
// Output :
// Hello world, This is an app to connect to sql server.
// Connection Successful !
// 1
To use promises or async for query execution, refer to the official documentation of the mssql package:
Promises
Async/Await
Now you have to create an ORACLE connection, which you can later query.
oracledb.getConnection(
{
user : "oli",
password : "password",
connectString : "ORACLE_DEV_DB_TNS_NAME"
},
connExecute
);
The connectString "ORACLE_DEV_DB_TNS_NAME" may live in a tnsnames.ora file in the same directory, or where your Oracle instant client is installed.
If you don't have an Oracle instant client installed on your development machine, you may follow the instant client installation guide for your operating system.
Building up the connection and executing is included in this oracle.js file with content as follows:
'use strict';
const oracledb = require('oracledb');
process.nextTick(function() {
oracleDbRelease(connection);
});
});
})
.catch(function(err) {
reject(err);
});
});
}
module.exports = queryArray;
module.exports.queryArray = queryArray;
module.exports.queryObject = queryObject;
Note that you have both methods queryArray and queryObject to call on your oracle object.
function connRelease(connection)
{
connection.close(
function(err) {
if (err) {
console.error(err.message);
}
});
}
Using the auto function you can define asynchronous relations between two or more functions:
async.auto({
get_data: function(callback) {
console.log('in get_data');
// async code to get some data
callback(null, 'data', 'converted to array');
},
make_folder: function(callback) {
console.log('in make_folder');
// async code to create a directory to store a file in
// this is run at the same time as getting the data
callback(null, 'folder');
},
write_file: ['get_data', 'make_folder', function(results, callback) {
console.log('in write_file', JSON.stringify(results));
// once there is some data and the directory exists,
// write the data to a file in the directory
callback(null, 'filename');
}],
email_link: ['write_file', function(results, callback) {
console.log('in email_link', JSON.stringify(results));
// once the file is written let's email a link to it...
// results.write_file contains the filename returned by write_file.
callback(null, {'file':results.write_file, 'email':'user@example.com'});
}]
}, function(err, results) {
console.log('err = ', err);
console.log('results = ', results);
});
This code could have been written synchronously, by just calling get_data, make_folder, write_file and email_link in the correct order. Async keeps track of the results for you, and if an error occurs (the first parameter of a callback is not equal to null) it stops the execution of the other functions.
try {
var a = 1;
b++; //this will cause an error because b is undefined
console.log(b); //this line will not be executed
} catch (error) {
console.log(error); //here we handle the error caused in the try block
}
In the try block, b++ causes an error, and that error is passed to the catch block, where it can be handled; the same error can even be re-thrown from the catch block, or modified a little and then thrown. Let's see the next example.
try {
var a = 1;
b++;
console.log(b);
} catch (error) {
error.message = "b variable is undefined, so the undefined can't be incremented"
throw error;
}
In the above example we modified the message property of error object and then throw the modified error.
You can throw any error in your try block and handle it in the catch block:
try {
var a = 1;
throw new Error("Some error message");
console.log(a); //this line will not be executed;
} catch (error) {
console.log(error); //will be the above thrown error
}
This creates a new error object, where the value of message is set as the message property of the created object. Usually the message argument is passed to the Error constructor as a string. However, if the message argument is an object rather than a string, the Error constructor calls the .toString() method of the passed object and sets that value as the message property of the created error object.
Each error object has a stack trace. The stack trace contains the error message and shows where the error happened (the output above shows the error stack). Once an error object is created, the system captures the stack trace of the error at the current line. To get the stack trace, use the stack property of any created error object. The two lines below are identical:
console.log(err);
console.log(err.stack);
or
or
The last example (throwing strings) is not good practice and is not recommended (always throw errors which are
instances of Error object).
Note that if you throw an error and there is no exception handler, the process will crash on that line; no code after it will be executed.
var a = 5;
var err = new Error("Some error message");
throw err; //this will print the error stack and node server will stop
a++; //this line will never be executed
console.log(a); //and this one also
var a = 5;
var err = new Error("Some error message");
console.log(err); //this will print the error stack
a++;
console.log(a); //this line will be executed and will print 6
With the addition of default function parameters you can now make arguments optional and have them default to a
value of your choice.
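As a quick sketch of a default parameter (the function and names here are invented for illustration):

```javascript
// `greeting` defaults to 'Hello' when the caller omits it.
function greet(name, greeting = 'Hello') {
  return `${greeting}, ${name}!`;
}

console.log(greet('Ada'));          // Hello, Ada!
console.log(greet('Ada', 'Howdy')); // Howdy, Ada!
```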
function argumentLength(...args){
    return args.length;
}

argumentLength(5) // returns 1
argumentLength(5, 3) // returns 2
argumentLength(5, 3, 6) // returns 3
By prefacing the last argument of your function with ..., all arguments passed to the function are read as an array. In this example we pass in multiple arguments and get the length of the array created from those arguments.
// Arrow Function
let sum = (a, b)=> a+b;
var arrowFn = () => console.log(this); // refers to window or global object as function is defined
in scope of global/window object
var service = {
    constructorFn : function(){
        console.log(this); // refers to service as the service object was used to call the method.
        let fn = () => console.log(this); // refers to the service object as the arrow function is
                                          // defined in a function called using the instance object.
        fn();
    }
}

service.constructorFn();
In an arrow function, this has lexical scope: the scope of the function in which the arrow function is defined.
The first example is the traditional way of defining functions, hence this refers to the global/window object.
In the second example, this is used inside an arrow function, so this refers to the scope where the function is defined (the window or global object). In the third example, this is the service object, as the service object is used to call the function.
In the fourth example, the arrow function is defined in, and called from, a function whose scope is service, hence it prints the service object.
The spread syntax allows an expression to be expanded in places where multiple arguments (for function calls) or
multiple elements (for array literals) or multiple variables are expected. Just like the rest parameters simply preface
your array with ...
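A minimal sketch (the arrays here are made up):

```javascript
const middle = [2, 3];

// Spread into an array literal:
const full = [1, ...middle, 4];
console.log(full); // [ 1, 2, 3, 4 ]

// Spread into function arguments:
console.log(Math.max(...full)); // 4
```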
An event loop is a loop that waits for events and then reacts to those events
while true:
wait for something to happen
react to whatever happened
Here's a simple form of an HTTP server which is single threaded with no event loop. The problem here is that it waits until each request is finished before starting to process the next one. If it takes a while to read the HTTP request headers or to fetch the file from disk, we should be able to start processing the next request while we wait for that to finish.
Now we have made our little HTTP server multi threaded. This way, we can immediately move on to the next
request because the current request is running in a background thread. Many servers, including Apache, use this
approach.
But it's not perfect. One limitation is that you can only spawn so many threads. For workloads where you have a
huge number of connections, but each connection only requires attention every once in a while, the multi-threaded
model won't perform very well. The solution for those cases is to use an event loop:
while true:
    wait for the next event to happen
    if a new connection was created:
        set it up and go back to waiting
    if an existing connection needs our attention:
        start reading/writing something to the socket, then go back to waiting
Hopefully this pseudocode is intelligible. Here's what's going on: We wait for things to happen. Whenever a new
connection is created or an existing connection needs our attention, we go deal with it, then go back to waiting.
That way, we perform well when there are many connections and each one only rarely requires attention.
In a real application (not pseudocode) running on Linux, the "wait for the next event to happen" part would be
implemented by calling the poll() or epoll() system call. The "start reading/writing something to a socket" parts
would be implemented by calling the recv() or send() system calls in non-blocking mode.
Reference:
2010
2011
2012
30th January : Node.js creator Ryan Dahl steps away from Node’s day-to-day
25th June : Node.js v0.8.0 [stable] is out
20th December : Hapi, a Node.js framework is released
2013
30th April : The MEAN Stack: MongoDB, ExpressJS, AngularJS and Node.js
17th May : How We Built eBay’s First Node.js Application
15th November : PayPal releases Kraken, a Node.js framework
22nd November : Node.js Memory Leak at Walmart
Eran Hammer of Wal-Mart labs came to the Node.js core team complaining of a memory leak he had
been tracking down for months.
19th December : Koa - Web framework for Node.js
2014
2015
Q1
Q2
Q3
Q4
2016
Q1
Q2
Q3
Q4
Reference
passport.serializeUser(function(user, done) { //In serialize user you decide what to store in the
session. Here I'm storing the user id only.
done(null, user.id);
});
passport.deserializeUser(function(id, done) { //Here you retrieve all the info of the user from the
session storage using the user id stored in the session earlier using serialize user.
db.findById(id, function(err, user) {
done(err, user);
});
});
app.use(session({ secret: 'super secret' })); //to make passport remember the user on other pages
too.(Read about session store. I used express-sessions.)
app.use(passport.initialize());
app.use(passport.session());
Callback functions are common in JavaScript. Callback functions are possible in JavaScript because functions are
first-class citizens.
Synchronous callbacks.
Callback functions can be synchronous or asynchronous. Since Asynchronous callback functions may be more
complex here is a simple example of a synchronous callback function.
// A function that uses a callback named `cb` as a parameter
function getSyncMessage(cb) {
  cb("Hello World!");
}

console.log("Before getSyncMessage call");
// Calling a function and sending in a callback function as an argument
getSyncMessage(function(message) {
  console.log(message);
});
console.log("After getSyncMessage call");
First we will step through how the above code is executed. This is more for those who do not already understand the concept of callbacks; if you do already understand it, feel free to skip this paragraph. First the code is parsed and
then the first interesting thing to happen is line 6 is executed which outputs Before getSyncMessage call to the
console. Then line 8 is executed which calls the function getSyncMessage sending in an anonymous function as an
argument for the parameter named cb in the getSyncMessage function. Execution is now done inside the
getSyncMessage function on line 3 which executes the function cb which was just passed in, this call sends an
argument string "Hello World" for the param named message in the passed in anonymous function. Execution then
goes to line 9 which logs Hello World! to the console. Then the execution goes through the process of exiting the
callstack (see also) hitting line 10 then line 4 then finally back to line 11.
The function you send in to a function as a callback may be called zero times, once, or multiple times. It all
depends on implementation.
The callback function may be called synchronously or asynchronously and possibly both synchronously and
asynchronously.
Just like normal functions, the names you give the parameters of your function are not important, but the order is. So for example on line 8 the parameter message could have been named statement, msg, or, if you're being nonsensical, something like jellybean. You should know what parameters are sent into your callback so you can name them appropriately.
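To illustrate the naming point (the function and names here are invented for the sketch):

```javascript
// `deliver` calls its callback once with a single argument.
function deliver(cb) {
  cb('payload');
}

// The parameter could be named anything; only its position matters.
deliver(function (jellybean) {
  console.log(jellybean); // payload
});
```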
Asynchronous callbacks.
One thing to note about JavaScript is it is synchronous by default, but there are APIs given in the environment
(browser, Node.js, etc.) that could make it asynchronous (there's more about that here).
Some common things that are asynchronous in JavaScript environments that accept callbacks:
Events
setTimeout
setInterval
the fetch API
Promises
Also, any function that uses one of the above functions may be wrapped with a function that takes a callback, and the callback would then be an asynchronous callback (although wrapping a promise with a function that takes a callback would likely be considered an anti-pattern, as there are more preferred ways to handle promises).
So given that information we can construct an asynchronous function similar to the above synchronous one.
// A function that uses a callback named `cb` as a parameter
function getAsyncMessage(cb) {
  setTimeout(function () { cb("Hello World!"); }, 1000);
}

console.log("Before getSyncMessage call");
// Calling a function and sending in a callback function as an argument
getAsyncMessage(function(message) {
  console.log(message);
});
console.log("After getSyncMessage call");
Line execution goes to line 6 logs "Before getSyncMessage call". Then execution goes to line 8 calling
getAsyncMessage with a callback for the param cb. Line 3 is then executed which calls setTimeout with a callback as
the first argument and the number 1000 as the second argument. setTimeout does whatever it does and holds on to
that callback so that it can call it later in 1000 milliseconds, but following setting up the timeout and before it
pauses the 1000 milliseconds it hands execution back to where it left off so it goes to line 4, then line 11, and then
pauses for 1 second and setTimeout then calls its callback function which takes execution back to line 3 where
getAsyncMessages callback is called with value "Hello World" for its parameter message which is then logged to the
console on line 9.
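The code being walked through is not reproduced in this extract; a minimal sketch consistent with the walkthrough (the function and message names are assumptions taken from the text) would be:

```javascript
// Sketch of the asynchronous example the walkthrough describes.
function getAsyncMessage(cb) {                 // "line 3": set up the timer
  setTimeout(function () {
    cb("Hello World");                         // fires ~1000 ms later
  }, 1000);
}

console.log("Before getSyncMessage call");     // "line 6"
getAsyncMessage(function (message) {           // "line 8"
  console.log(message);                        // "line 9"
});
console.log("After getSyncMessage call");      // "line 11"
```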
NodeJS has asynchronous callbacks and commonly supplies two parameters to your functions sometimes
conventionally called err and data. An example with reading a file text.
const fs = require("fs");
It's good practice to handle the error somehow, even if you're just logging it or throwing it. The else is not necessary
if you throw or return, and it can be removed to decrease indentation, so long as you stop execution of the current
function in the if by doing something like throwing or returning.
Though it may be common to see err, data, your callbacks will not always use that pattern; it's best to look at the
documentation.
Another example callback comes from the express library (express 4.x):
// this app.get method takes a url route to watch for and a callback
// to call whenever that route is requested by a user.
app.get('/', function(req, res) {
  res.send('hello world');
});
app.listen(3000);
This example shows a callback that is called multiple times. The callback is provided with two objects as params,
named here req and res. These names correspond to request and response respectively, and they provide ways
to inspect the incoming request and to set up the response that will be sent to the user.
As you can see there are various ways a callback can be used to execute sync and async code in JavaScript and
callbacks are very ubiquitous throughout JavaScript.
const fs = require('fs');
let filename = `${__dirname}/myfile.txt`;
It is recommended to nest no more than 2 callback functions. This will help you maintain code readability and will
be much easier to maintain in the future. If you need to nest more than 2 callbacks, try to make use of
distributed events instead.
There also exists a library called async that helps manage callbacks and their execution available on npm. It
increases the readability of callback code and gives you more control over your callback code flow, including
allowing you to run them in parallel or in series.
Promises are a tool for async programming. In JavaScript promises are known for their then methods. Promises
have two main states, 'pending' and 'settled'. Once a promise is 'settled' it cannot go back to 'pending'. This means
that promises are mostly good for events that only occur once. The 'settled' state itself has two sub-states, 'resolved'
and 'rejected'. You can create a new promise using the new keyword and passing a function into the constructor: new
Promise(function (resolve, reject) {}).
The function passed into the Promise constructor receives a first and a second parameter, usually named
resolve and reject respectively. The naming of these two parameters is convention; calling them puts the promise
into either the 'resolved' state or the 'rejected' state. When either one is called, the promise goes from
being 'pending' to 'settled'. resolve is called when the desired action, which is often asynchronous, has been
performed, and reject is used if the action has errored.
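The timeout helper used in the snippet that follows is not shown in this extract; a minimal sketch, assuming it resolves with the string from the walkthrough, would be:

```javascript
// Wrap setTimeout in a Promise that resolves after ms milliseconds.
function timeout(ms) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      resolve("It was resolved!");
    }, ms);
  });
}
```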
timeout(1000).then(function (dataFromPromise) {
  // logs "It was resolved!"
  console.log(dataFromPromise);
});
console.log("waiting...");
Console output:
When timeout is called, the function passed to the Promise constructor is executed without delay. Then the
setTimeout method is executed and its callback is set to fire in ms milliseconds, in this case ms=1000. Since
the callback to setTimeout hasn't fired yet, the timeout function returns control to the calling scope. The chain of
then methods is then stored to be called later, when/if the promise resolves. If there were catch methods
here they would be stored as well, but would fire when/if the promise rejects.
The script then prints 'waiting...'. One second later the setTimeout calls its callback which calls the resolve function
with the string "It was resolved!". That string is then passed into the then method's callback and is then logged to
the user.
In the same way that you can wrap the asynchronous setTimeout function, which requires a callback, you can wrap
any singular asynchronous action with a promise.
setTimeout(function() {
  console.log("A");
}, 1000);

setTimeout(function() {
  console.log("B");
}, 0);

getDataFromDatabase(function(err, data) {
  console.log("C");
  setTimeout(function() {
    console.log("D");
  }, 1000);
});

console.log("E");
Output: this much is known for sure: E, B, A, D, in that order. When C will be logged is unknown.
Explanation: the interpreter does not stop at the setTimeout and getDataFromDatabase calls, so the first thing it
logs is E. The callback functions (the first argument of setTimeout) run asynchronously after the set timeout!
More details:
1. E has no setTimeout
2. B has a set timeout of 0 milliseconds
3. A has a set timeout of 1000 milliseconds
4. D must wait for the database request and after that must wait another 1000 milliseconds, so it comes after A.
5. C is unknown, because it is unknown when the database data arrives. It could be before or after A.
Errors must always be handled. In synchronous code you can use a try/catch. But this does not work when you
work asynchronously! Example:
try {
  setTimeout(function() {
    throw new Error("I'm an uncaught error and will stop the server!");
  }, 100);
}
catch (ex) {
  console.error("This error will not be caught in an asynchronous situation: " + ex);
}
Working possibilities
Version ≤ v0.8
Event handlers
Inside a domain, errors are released via event emitters. This way all errors, timers and callback methods are
implicitly registered inside the domain. When an error occurs, an error event is sent and the application doesn't crash.
d1.run(function() {
  d2.add(setTimeout(function() {
    throw new Error("error on the timer of domain 2");
  }, 0));
});

d1.on("error", function(err) {
  console.log("error at domain 1: " + err);
});

d2.on("error", function(err) {
  console.log("error at domain 2: " + err);
});
The process object is a global that provides information about, and control over, the current Node.js process. As a
global, it is always available to Node.js applications without using require().
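A few of the properties it exposes, as a quick illustration:

```javascript
// Information about the current Node.js process, no require() needed.
console.log(process.pid);      // numeric id of the current process
console.log(process.version);  // Node.js version string, e.g. 'v18.17.0'
console.log(process.platform); // 'linux', 'darwin', 'win32', ...
console.log(process.argv);     // command-line arguments as an array
```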
The process.stdout property returns a Writable stream equivalent to or associated with stdout.
process.stdin.resume();
console.log('Enter the data to be displayed');
process.stdin.on('data', function(data) {
  process.stdout.write(data);
});
Let's dissect this. MongoDB and Mongoose use JSON (actually BSON, but that's irrelevant here) as the data format.
At the top, I've set a few variables to reduce typing.
I create a new Schema and assign it to a constant. It's simple JSON, and each attribute is another object with
properties that help enforce a more consistent schema. unique forces new instances being inserted into the
database to, obviously, be unique. This is great for preventing a user from creating multiple accounts on a service.
required is another, declared as an array. The first element is the boolean value, and the second is the error
message used should the value being inserted or updated fail to exist.
ObjectIds are used for relationships between Models. Examples might be 'Users have many Comments'. Other
Lastly, exporting the model for use with your API routes provides access to your schema.
app.use('/api', routes);
};
We can now get the data from our database by sending an HTTP request to this endpoint. A few key things, though:
1. Limit does exactly what it looks like. I'm only getting 5 documents back.
2. Lean strips away some stuff from the raw BSON, reducing complexity and overhead. Not required. But
useful.
3. When using find instead of findOne, confirm that the doc.length is greater than 0. This is because find
always returns an array, so an empty array will not handle your error unless it is checked for length
4. I personally like to send the error message in that format. Change it to suit your needs. Same thing for the
returned document.
5. The code in this example is written under the assumption that you have placed it in another file and not
directly on the express server. To call this in the server, include these lines in your server code:
var countries = [
  {"key": "DE", "name": "Deutschland", "active": false},
  {"key": "ZA", "name": "South Africa", "active": true}
];
This assumes you have a file named data.csv in the same folder.
'use strict'
const fs = require('fs');
You can now use the array like any other to do work on it.
First of all, it needs a config file, let's create it. For Debian based distros, it will be in
/etc/systemd/system/node.service
[Unit]
Description=My super nodejs app
[Service]
# set the working directory to have consistent relative paths
WorkingDirectory=/var/www/app
# send logs to syslog here (it doesn't compete with other log config in the app itself)
StandardOutput=syslog
StandardError=syslog
[Install]
# start node at multi user system level (= sysVinit runlevel 3)
WantedBy=multi-user.target
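Note that the unit file above contains no ExecStart line, which systemd requires in order to know what to launch. A sketch of the missing piece for the [Service] section (the node binary path and app entry point are assumptions):

```
[Service]
# paths are assumptions; adjust to your node binary and app entry point
ExecStart=/usr/bin/node /var/www/app/app.js
# restart the app automatically if it crashes
Restart=on-failure
```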
It's now possible to respectively start, stop and restart the app with systemctl start node, systemctl stop node and systemctl restart node.
To tell systemd to automatically start node on boot, just type: systemctl enable node.
In the example, we'll set it up for the widest configuration (authorize all request types from any domain).
Usually, node is run behind a reverse proxy on production servers. Therefore the reverse proxy server (such as Apache or
Nginx) will be responsible for the CORS config.
To conveniently adapt to this scenario, it's possible to only enable Node.js CORS when it's in development.
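A sketch of such development-only permissive CORS middleware; the Express-style (req, res, next) signature is an assumption, and only plain Node response methods are used:

```javascript
// Set wide-open CORS headers, but only outside production.
function devCors(req, res, next) {
  if (process.env.NODE_ENV !== 'production') {
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.setHeader('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE,OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');
  }
  next();
}
```

It would be registered with app.use(devCors) before the routes.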
The tools of my preference are Chrome DevTools (the Chrome inspector) coupled with node-inspector.
$ node-inspector
Step 4: Open http://127.0.0.1:8080/?port=5858 in the Chrome browser. You will see a Chrome DevTools
interface with your Node.js application's source code in the left panel. Since we used the debug-break option while
starting the application, code execution will stop at the first line of code.
Step 5: This is the easy part, where you switch to the profiling tab and start profiling the application. In case you
want to get the profile for a particular method or flow, make sure code execution is break-pointed just before that
piece of code is executed.
You can use these articles to learn how to read the profiles:
let encoder = {
  hasEncoder: false,
  contentEncoding: {},
  createEncoder: () => { throw new Error('There is no encoder') }
}
if (!acceptsEncoding) {
  acceptsEncoding = ''
}

if (acceptsEncoding.match(/\bdeflate\b/)) {
  encoder = {
    hasEncoder: true,
    contentEncoding: { 'content-encoding': 'deflate' },
    createEncoder: zlib.createDeflate
  }
} else if (acceptsEncoding.match(/\bgzip\b/)) {
  encoder = {
    hasEncoder: true,
    contentEncoding: { 'content-encoding': 'gzip' },
    createEncoder: zlib.createGzip
  }
}

response.writeHead(200, encoder.contentEncoding)

if (encoder.hasEncoder) {
  stream = stream.pipe(encoder.createEncoder())
}

stream.pipe(response)
}).listen(1337)
// this will postpone tick run step's while-loop to event loop cycles
// any other IO-bound operation (like filesystem reading) can take place
// in parallel
tick(1e+6)
tick(1e+7)
console.log('this will output before all of tick operations. i = %d', i)
console.log('because tick operations will be postponed')
tick(1e+8)
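The tick function called above is not reproduced in this extract; a sketch consistent with the comments (the setImmediate deferral and the shared counter i are assumptions) might be:

```javascript
// Defer the CPU-bound while-loop to a later event loop cycle so other
// IO-bound work can be interleaved between tick calls.
let i = 0;
function tick(max) {
  setImmediate(function () {
    let n = 0;
    while (n < max) {
      n++;
      i++;
    }
  });
}
```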
In simpler terms, the event loop is a single-threaded queue mechanism which executes your CPU-bound code until
the end of its execution, and your IO-bound code in a non-blocking fashion.
However, Node.js under the hood uses multi-threading for some of its operations through the libuv library.
Performance Considerations
Non-blocking operations will not block the queue and will not affect the performance of the loop.
However, CPU-bound operations will block the queue, so you should be careful not to do heavy CPU-bound
operations in your Node.js code.
Node.js doesn't block on IO because it offloads the work to the operating system kernel, and when the IO operation
supplies data (as an event), it notifies your code with your supplied callbacks.
The http API uses a "Global Agent" by default. You can supply your own agent, or disable the agent entirely, like this:
options.agent = false
Pitfalls
You should do the same thing for the https API if you want the same effect.
Create and navigate to a new directory to hold your package, and then run yarn init
{
"name": "my-package",
"version": "1.0.0",
"description": "A test package",
"main": "index.js",
"author": "StackOverflow Documentation",
"license": "MIT"
}
Now let's try adding a dependency. The basic syntax for this is yarn add [package-name]
This will add a dependencies section to your package.json, and add ExpressJS
"dependencies": {
"express": "^4.15.2"
}
macOS
MacPorts
sudo port install yarn
Add the following to your preferred shell profile (.profile, .bashrc, .zshrc etc)
Windows
Installer
Chocolatey
choco install yarn
Linux
Debian / Ubuntu
Install Yarn
Arch
yaourt -S yarn
Solus
All Distributions
Add the following to your preferred shell profile (.profile, .bashrc, .zshrc etc)
Tarball
cd /opt
wget https://yarnpkg.com/latest.tar.gz
tar zvxf latest.tar.gz
Npm
Post Install
yarn --version
If you need a specific version of the package, you can use yarn add package@version.
If the version you need to install has been tagged, you can use yarn add package@tag.
Important: You will need to install the Redis database on your machine. Download it from here for Linux
users and from here for the Windows version. We will also be using the Redis Desktop Manager app; install it
from here.
app.use(bodyParser.json());

app.oauth = oauthserver({
  model: require('./routes/Oauth2/model'),
  grants: ['password', 'refresh_token'],
  debug: true
});

// Error handling
app.use(app.oauth.errorHandler());

app.listen(3000);
var db = redis.createClient();
callback(null, {
accessToken: token.accessToken,
clientId: token.clientId,
expires: token.expires ? new Date(token.expires) : null,
userId: token.userId
});
});
};
callback(null, {
clientId: client.clientId,
clientSecret: client.clientSecret
});
});
};
callback(null, {
refreshToken: token.accessToken,
clientId: token.clientId,
expires: token.expires ? new Date(token.expires) : null,
userId: token.userId
});
});
};
callback(null, {
id: username
});
});
};
You only need to install redis on your machine and run the following node file
#! /usr/bin/env node
var db = require('redis').createClient();
db.multi()
.hmset('users:username', {
id: 'username',
username: 'username',
password: 'password'
})
.hmset('clients:client', {
clientId: 'client',
clientSecret: 'secret'
})//clientId + clientSecret to base 64 will generate Y2xpZW50OnNlY3JldA==
.sadd('clients:client:grant_types', [
'password',
'refresh_token'
])
.exec(function (errs) {
if (errs) {
console.error(errs[0].message);
return process.exit(1);
}
Note: This file will set credentials for your frontend to request a token. So your request needs:
Header:
1. authorization: Basic followed by the base64 of clientId:clientSecret (client:secret from the setup above
encodes to Y2xpZW50OnNlY3JldA==)
grant_type: depends on which options you want. I chose password, which takes only the username
and password created in redis. The data in redis will be as below:
{
  "access_token": "1d3fe602da12a086ecb2b996fd7b7ae874120c4f",
  "token_type": "bearer", // will be used to access the api together with the access token, e.g. bearer 1d3fe602da12a086ecb2b996fd7b7ae874120c4f
  "expires_in": 3600,
  "refresh_token": "b6ad56e5c9aba63c85d7e21b1514680bbf711450"
}
Now we need to call our api and grab some secured data with the access token we have just created; see below:
When the token expires, the api will throw an error saying that the token has expired and that you cannot access any
of the api calls; see the image below:
Hope this helps!
// usual requirements
var express = require('express'),
i18n = require('i18n'),
app = module.exports = express();
i18n.configure({
  // setup some locales - other locales default to en silently
  locales: ['en', 'ru', 'de']
});
app.configure(function () {
// you will need to use cookieParser to expose cookies to req.cookies
app.use(express.cookieParser());
});
// serving homepage
app.get('/', function (req, res) {
res.send(res.__('Hello World'));
});
// starting server
if (!module.parent) {
app.listen(3000);
}
{
  "name": "app-name",
  "script": "server",
  "exec_mode": "cluster",
  "instances": 0,
  "wait_ready": true,
  "listen_timeout": 10000,
  "kill_timeout": 5000
}
wait_ready
listen_timeout
kill_timeout
server.js
server.listen(port, function() {
  process.send('ready');
});

process.on('SIGINT', function() {
  server.close(function() {
    process.exit(0);
  });
});
You might need to wait for your application to have established connections with your
DBs/caches/workers/whatever before PM2 can consider your application online. To do this, you
need to provide wait_ready: true in a process file. This will make PM2 listen for that event. In your application you
will need to add process.send('ready'); when you want your application to be considered ready.
When a process is stopped/restarted by PM2, some system signals are sent to your process in a given order.
First a SIGINT signal is sent to your process, a signal you can catch to know that your process is going to be
mkdir our_project
cd our_project
Now we're in the place where our code will live. To create the main archive of our project you can run
It's simple:
Linux distros and Mac should use sudo to install this, because the packages are installed in the Node.js directory, which
is only accessible by the root user. If everything went fine we can, finally, create the express-app skeleton; just run
express
This command will create inside our folder an express example app. The structure is as follow:
bin/
public/
routes/
views/
app.js
package.json
Now if we run npm start and go to http://localhost:3000 we'll see the express app up and running. Fair enough, we've
generated an express app without too much trouble, but how can we mix this with AngularJS?
Express is a framework built on top of Node.js; you can see the official documentation at the Express site. For
our purpose we need to know that Express is responsible for rendering, for example, the home page of our
application when we type http://localhost:3000/home. From the recently created app we can check:
FILE: routes/index.js
var express = require('express');
var router = express.Router();
module.exports = router;
extends layout

block content
  h1= title
  p Welcome to #{title}
This is another powerful Express feature: template engines. They allow you to render content in the page by
passing variables to it, or to inherit another template so your pages are more compact and more understandable to
others. The file extension is .jade; as far as I know, Jade was renamed to Pug, which is basically the same template
engine but with some updates and core modifications.
Ok, to start using Pug as the template engine of our project we need to run npm install pug --save.
This will install Pug as a dependency of our project and save it to package.json. To use it we need to modify the file
app.js:
Then replace jade with pug on the view engine line, and that's all. We can run our project again with npm start and
we'll see that everything is working fine.
After we've downloaded AngularJS we should copy the file to the public/javascripts folder inside our project. A
little explanation: this is the folder that serves the static assets of our site: images, CSS, javascript files and so on. Of
course this is configurable through the app.js file, but we'll keep it simple. Now we create a file named ng-app.js,
the file where our application will live, inside the public javascripts folder, just where AngularJS lives. To bring
AngularJS up we need to modify the content of views/layout.pug as follows:
doctype html
html(ng-app='first-app')
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body(ng-controller='indexController')
    block content
    script(type='text/javascript', src='javascripts/angular.min.js')
    script(type='text/javascript', src='javascripts/ng-app.js')
What are we doing here? Well, we're including the AngularJS core and our recently created file ng-app.js, so when the
angular.module('first-app', [])
  .controller('indexController', ['$scope', indexController]);

function indexController($scope) {
  $scope.name = 'sigfried';
}
We're using the most basic AngularJS feature here, two-way data binding, which allows us to refresh the content of
our view and controller instantly. This is a very simplified explanation; you can do some research on Google or
StackOverflow to see how it really works.
So, we have the basic blocks of our AngularJS application, but there is something we still have to do: we need to update
our index.pug page to see the changes of our angular app. Let's do it:
extends layout

block content
  div(ng-controller='indexController')
    h1= title
    p Welcome {{name}}
    input(type='text' ng-model='name')
Here we're just binding the input to our defined property name in the AngularJS scope inside our controller:
$scope.name = 'sigfried';
The purpose of this is that whenever we change the text in the input, the paragraph above will update its content
inside the {{name}}. This is called interpolation, yet another AngularJS feature to render content in the
template.
So, all is setup, we can now run npm start go to http://localhost:3000 and see our express application serving the
page and AngularJS managing the application frontend.
The Express server comes in handy; it has deep adoption among users and the community and keeps getting more popular.
Let's create an Express server. For package management and dependency flexibility we will use npm (the Node
Package Manager).
{
"name": "expressRouter",
"version": "0.0.1",
"scripts": {
"start": "node Server.js"
},
"dependencies": {
"express": "^4.12.3"
}
}
2. Save the file and install the express dependency using the command npm install. This will create
node_modules in your project directory along with the required dependencies.
3. Let's create the Express web server. Go to the project directory and create a server.js file. server.js
});
app.use("/api",router);
// Listen to this Port
app.listen(3000, function() {
  console.log("Live at Port 3000");
});
node server.js
http://localhost:3000/api/
router.use(function(req, res, next) {
  console.log("/" + req.method);
  next();
});
app.use("/api",router);
app.listen(3000, function() {
  console.log("Live at Port 3000");
});
http://localhost:3000/api/
router.get("/user/:id", function(req, res) {
  res.json({"message": "Hello " + req.params.id});
});
'use strict';
module.exports = {
};
index.js
// classic callbacks
// promises
math.promiseSum(2, 5)
  .then(function(result) {
    console.log('Test 3: the answer is ' + result);
  })
  .catch(function(err) {
    console.log('Test 3: ' + err);
  });

math.promiseSum(1)
  .then(function(result) {
    console.log('Test 4: the answer is ' + result);
  })
  .catch(function(err) {
    console.log('Test 4: ' + err);
  });

math.sum(8, 2)
  .then(function(result) {
    console.log('Test 5: the answer is ' + result);
  })
  .catch(function(err) {
    console.log('Test 5: ' + err);
  });
(async () => {
  try {
    let x = await math.sum(6, 3);
    console.log('Test 7a: ' + x);
  } catch (err) {
    console.log(err.message);
  }
})();
|-- Config
|-- config.json
|-- appConfig
|-- pets.config
|-- payment.config
Now the most vital directories, where we distinguish between the server side/backend and the frontend
modules. The two directories server and webapp represent the backend and frontend respectively, and we
can choose to put them inside a source directory, viz. src.
You can go with different names for server or webapp per personal choice, depending on what
makes sense for you. Make sure you don't make them too long or too complex, as it is in the end an
internal project structure.
Inside the server directory you can have the controller and the App.js/index.js, which will be your main Node.js file
and starting point. The server dir can also have a dto dir which holds all the data transfer objects which will be
used by the API controllers.
|-- server
|-- dto
|-- pet.js
|-- payment.js
|-- controller
|-- PetsController.js
|-- PaymentController.js
|-- App.js
The webapp directory can be divided into two major parts, public and mvc; this is again influenced by what
build strategy you want to use. We are using browserify to build the MVC part of the webapp and minify the
contents of the mvc directory, simply put.
Now the public directory can contain all the static resources: images, CSS (you can have Sass files as well) and
|-- public
|-- build // will contain minified scripts (mvc)
|-- images
|-- mouse.jpg
|-- cat.jpg
|-- styles
|-- style.css
|-- views
|-- petStore.html
|-- paymentGateway.html
|-- header.html
|-- footer.html
|-- index.html
The mvc directory will contain the front-end logic, including the models, the view controllers and any other utils
modules you may need as part of the UI. Also the index.js or shell.js, whichever suits you, is part of this
directory as well.
|-- mvc
|-- controllers
|-- Dashboard.js
|-- Help.js
|-- Login.js
|-- utils
|-- index.js
So in conclusion the entire project structure will look like below. A simple build task like gulp browserify will
minify the mvc scripts and publish them in the public directory. We can then serve this public directory as static
resources via the app.use(express.static('public')) API.
|-- node_modules
|-- src
|-- server
|-- controller
|-- App.js // node app
|-- webapp
|-- public
|-- styles
|-- images
|-- index.html
|-- mvc
|-- controller
|-- shell.js // mvc shell
|-- config
|-- Readme.md
|-- .gitignore
|-- package.json
Example:
Many helper methods exist in Async that can be used in different situations, like series, parallel, waterfall, etc. Each
function has a specific use-case, so take some time to learn which one will help in which situations.
As good as Async is, like anything, it's not perfect. It's very easy to get carried away by combining series, parallel,
forever, etc., at which point you're right back where you started with messy code. Be careful not to prematurely
optimize: just because a few async tasks can be run in parallel doesn't always mean they should be. In reality, since
Node is only single-threaded, running CPU-bound tasks in parallel using Async gives little to no performance gain.
The source is available for download from https://github.com/caolan/async. Alternatively, you can install it using
npm:
npm install async
var fs = require('fs');
var async = require('async');
async.waterfall([
function(callback) {
fs.readFile(myFile, 'utf8', callback);
},
function(txt, callback) {
A good sample to start this topic with is a Node.js server communicating with an Arduino via serialport.
Sample app.js:
arduinoSerialPort.on('open', function() {
  console.log('Serial Port ' + arduinoCOMPort + ' is opened.');
});
return res.send('Working');
})
if (action == 'led') {
  arduinoSerialPort.write("w");
  return res.send('Led light is on!');
}

if (action == 'off') {
  arduinoSerialPort.write("t");
  return res.send("Led light is off!");
}
});
app.listen(port, function () {
  console.log('Example app listening on port http://0.0.0.0:' + port + '!');
});
node app.js
Arduino code
// the setup function runs once when you press reset or power the board
void setup() {
  // initialize digital pin LED_BUILTIN as an output.
  pinMode(LED_BUILTIN, OUTPUT);
  digitalWrite(LED_BUILTIN, LOW);
}
Starting Up
http://0.0.0.0:3000/led
http://0.0.0.0:3000/off
#include <node_api.h>
#include <stdio.h>
printf("Hello world\n");
return retval;
}
if (status != napi_ok)
return;
}
NAPI_MODULE(hello, init)
However, Node.js itself runs multi-threaded under the hood: I/O operations and the like run from a thread pool.
Furthermore, each instance of a Node application runs separately; therefore, to run multi-threaded applications, one
launches multiple instances.
Clustering is desirable when the different instances have the same flow of execution and don't depend on one
another. In this scenario, you have one master that can start forks, and the forks (or children) work
independently and have their own RAM space and event loop.
Setting up clusters can be beneficial for websites/APIs. Any fork can serve any customer, as it doesn't depend on
the others. A database (like Redis) would be used to share cookies, as variables can't be shared between the
forks!
if (cluster.isMaster) {
  // runs only once (within the master);
  console.log('I am the master, launching workers!');
  for (var i = 0; i < numCPUs; i++) cluster.fork();
} else {
  // runs in each fork
  console.log('I am a fork!');
The communication goes both ways, so parent and child can listen for messages and send messages.
Parent (../parent.js)
Child (../child.js)
})
Besides 'message', one can listen to many events like 'error', 'exit' or 'disconnect'.
Starting a child process has a certain cost associated with it. One would want to spawn as few of them as possible.
Installation
npm install --save activedirectory
Usage
// Initialize
var ActiveDirectory = require('activedirectory');
var config = {
  url: 'ldap://dc.domain.com',
  baseDN: 'dc=domain,dc=com'
};
var ad = new ActiveDirectory(config);
var username = 'john.smith@domain.com';
var password = 'password';
// Authenticate
ad.authenticate(username, password, function(err, auth) {
  if (err) {
    console.log('ERROR: ' + JSON.stringify(err));
    return;
  }
  if (auth) {
    console.log('Authenticated!');
  }
  else {
    console.log('Authentication failed!');
  }
});
require imports certain files or packages for use with Node.js's module system. It is used to improve code structure
and reuse. require() is also used on local files, with a direct path from the file that is require'ing.
function analyzeWeather(weather_data) {
  console.log('Weather information for ' + weather_data.time + ': ');
  console.log('Rainfall: ' + weather_data.precip);
  console.log('Temperature: ' + weather_data.temp);
  //More weather_data analysis/printing...
}
This file contains only the method analyzeWeather(weather_data). If we want to use this function, it must be
either used inside of this file or copied to the file that wants to use it. However, Node includes a very useful
tool to help with code and file organization: modules.
In order to utilize our function, we must first export the function through a statement at the beginning. Our new
file looks like this,
module.exports = {
  analyzeWeather: analyzeWeather
}

function analyzeWeather(weather_data) {
  console.log('Weather information for ' + weather_data.time + ': ');
  console.log('Rainfall: ' + weather_data.precip);
  console.log('Temperature: ' + weather_data.temp);
  //More weather_data analysis/printing...
}
With this small module.exports statement, our function is now ready for use outside of the file. All that is left to do
is to use require().
When require'ing a function or file, the syntax is very similar. It is usually done at the beginning of the file, with the
result assigned to a var or const for use throughout the file. For example, we have another file (on the same level as
analyze.js) named handleWeather.js that looks like this,
const analysis = require('./analyze.js');

weather_data = {
  time: '01/01/2001',
  precip: 0.75,
  temp: 78,
  //More weather data...
};
analysis.analyzeWeather(weather_data);
In this file, we are using require() to grab our analysis.js file. When used, we just call the variable or constant
When this file is run, it first require's (imports) the package you just installed, called request. Inside of the request
package there are many functions you now have access to, one of which is called get. In the next couple of lines, that
function is used to make an HTTP GET request.
For a modular code structure the logic should be divided into these directories and files.
Controllers - The controllers handle all the logic behind validating request parameters and queries, and sending
responses with the correct codes.
Services - The services contain the database queries, returning objects or throwing errors.
This way you will end up writing more code, but in the end the code will be much more maintainable and separated.
user.model.js
module.exports = User;

user.routes.js
var express = require('express');
var router = express.Router();
var UserController = require('../controllers/user.controllers');

router.get('/', UserController.getUsers);
module.exports = router;

user.controllers.js
var UserService = require('../services/user.service');

user.services.js
var User = require('../models/user.model');

exports.getUsers = async function (query) {
    try {
        var users = await User.find(query);
        return users;
    } catch (e) {
        // Log Errors
        throw Error('Error while Paginating Users');
    }
}
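As a self-contained sketch of this controller/service split (the User model is stubbed in memory here, and the function names are illustrative, not the exact ones from the files above), the controller picks status codes while the service only queries and throws:

```javascript
// Minimal sketch of the controller/service split. The User model is a
// hypothetical in-memory stub standing in for a real database model.
const User = {
  find: async function (query) {
    const all = [{ name: 'alice' }, { name: 'bob' }];
    return all.filter((u) => !query.name || u.name === query.name);
  },
};

// Service layer: database queries only; returns objects or throws.
async function getUsersService(query) {
  try {
    return await User.find(query);
  } catch (e) {
    // Log Errors
    throw Error('Error while Paginating Users');
  }
}

// Controller layer: validates input and sends responses with correct codes.
async function getUsers(req, res) {
  try {
    const users = await getUsersService(req.query || {});
    res.status(200).json(users);
  } catch (e) {
    res.status(500).json({ message: e.message });
  }
}

// A tiny fake `res` so the sketch runs without Express.
const res = {
  status(code) { this.statusCode = code; return this; },
  json(payload) { this.payload = payload; },
};

getUsers({ query: { name: 'alice' } }, res).then(() => {
  console.log(res.statusCode, res.payload); // prints: 200 [ { name: 'alice' } ]
});
```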
So if you want to make web app notifications, I suggest you use Push.js or the OneSignal framework for web/mobile
apps.
Push is the fastest way to get up and running with JavaScript notifications. A fairly new addition to the official
specification, the Notification API allows modern browsers such as Chrome, Safari, Firefox, and IE 9+ to push
notifications to a user's desktop.
You will have to use Socket.io and some backend framework; I will use Express for this example.
After you are done with that, you should be good to go. This is how it should look if you want to make a simple
notification:
Push.create('Hello World!')
I will assume that you know how to set up Socket.io with your app. Here is a code example of my backend app
with Express:
server.listen(80);
});
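The fragment above is only the tail of the server file. A fuller sketch of what such an Express + Socket.io backend might look like (the 'notification' event name and its payload are assumptions for illustration, not from the original example):

```javascript
// Hypothetical Express + Socket.io backend that emits notification events.
// Assumes `npm install express socket.io` has been run.
const express = require('express');
const app = express();
const server = require('http').createServer(app);
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  // Send a payload the browser can turn into Push.create('Hello World!')
  socket.emit('notification', { title: 'Hello World!' });
});

server.listen(80);
```

On the client, a socket.on('notification', ...) handler would then call Push.create with the received title.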
There you go: now you should be able to display your notification. This also works on any Android device, and if
you want to use Firebase Cloud Messaging, you can use it with this module. Here is the link for that example,
written by Nick (the creator of Push.js).
For Windows there is an nvm-windows package with an installer. This GitHub page has the details for installing and
using the nvm-windows package.
After installing nvm, run "nvm on" from the command line. This enables nvm to control the Node versions.
Note: You may need to restart your terminal for it to recognize the newly installed nvm command.
You can also install a specific Node version, by passing the major, minor, and/or patch versions:
$ nvm install 6
$ nvm install 4.2
To list the Node versions available for installation, enter:
$ nvm ls-remote
You can then switch versions by passing the version the same way you do when installing:
$ nvm use 5
You can set a specific version of Node that you installed to be the default version by entering:
$ nvm alias default 4.2
To display a list of Node versions that are installed on your machine, enter:
$ nvm ls
To use project-specific node versions, you can save the version in .nvmrc file. This way, starting to work with
another project will be less error-prone after fetching it from its repository.
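For example (the version number here is only an illustration):

```shell
# Pin the project's Node version in a .nvmrc file at the project root
echo "6.10.2" > .nvmrc

# Later, in that project directory:
nvm use        # switches to the version listed in .nvmrc
nvm install    # installs that version first if it is missing
```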
When Node is installed via nvm we don't have to use sudo to install global packages since they are installed in home
folder. Thus npm i -g http-server works without any permission errors.
brew update
You may need to change permissions or paths. It's best to run this before proceeding:
brew doctor
Once Node.js is installed, you can validate the version installed by running:
node -v
Macports
You can now run Node directly from the CLI by invoking node. Also, you can check your current Node version with:
node -v
All Node.js binaries, installers, and source files can be downloaded here.
You can download just the node.exe runtime or use the Windows installer (.msi), which will also install npm, the
recommended package manager for Node.js, and configure paths.
You can also install by package manager Chocolatey (Software Management Automation).
More information about current version, you can find in the choco repository here.
# the node & npm versions in apt are outdated. This is how you can update them:
sudo npm install -g npm
sudo npm install -g n
sudo n stable # (or lts, or a specific version)
Using the latest of a specific version (e.g. LTS 6.x) directly from NodeSource:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
Also, for the right way to install global npm modules, set the personal directory for them (eliminates the need for
sudo and avoids EACCES errors):
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile
source ~/.profile
latest:
n latest
stable:
n stable
lts:
n lts
A specific version:
n <version>
e.g. n 4.4.7
If this version is already installed, this command will activate that version.
Switching versions
n by itself will produce a selection list of installed binaries. Use up and down to find the one you want and Enter to
activate it.
[optional]
sudo apt-get install git
cd ~
git clone https://github.com/nodejs/node.git
cd ~
wget https://nodejs.org/dist/v6.10.2/node-v6.10.2.tar.gz
tar -xzvf node-v6.10.2.tar.gz
cd node-v6.10.2
./configure
make
sudo make install
git
clang and clang++ 3.4 or newer, or gcc and g++ 4.8 or newer
Python 2.6 or 2.7
GNU Make 3.81 or newer
Get source
Node.js v7.x
Build
cd node
./configure
make -jX    # where X is the number of CPU cores, e.g. make -j4
su -c 'make install'
Cleanup [Optional]
cd
rm -rf node
This guide assumes you are already using Fish as your shell.
Install nvm
Install Oh My Fish
(Note: You will be prompted to restart your terminal at this point. Go ahead and do so now.)
We will install plugin-nvm via Oh My Fish to expose nvm capabilities within the Fish shell:
You are now ready to use nvm. You may install and use the version of Node.js of your liking. Some examples:
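These commands work the same under Fish as under Bash; the version number below is only an illustration:

```shell
nvm install v6.10.2   # install a specific Node version
nvm use v6.10.2       # switch the current shell to it
nvm ls                # list the versions installed on this machine
node -v               # confirm which version is active
```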
Final Notes
Remember again that we no longer need sudo when dealing with Node.js using this method! Node versions,
packages, and so on are installed in your home directory.