CHAPTER-1
Your First Node API
Hello Node.js
Node.js was first released in 2009 by Ryan Dahl as a reaction to how slow web servers were at the time. Most web servers would block on any I/O task, such as reading from the file system or accessing the network, and this would dramatically lower their throughput. For example, if a server was receiving a file upload, it would not be able to handle any other request until the upload was finished.
At that time, Dahl mostly worked with Ruby, and the dominant model for web applications was to have a pool of Ruby processes that a web server (e.g. Nginx) would proxy to. If one Ruby process was blocked with an upload, Nginx would serve the request to another.
Node.js changed this model by making all I/O tasks non-blocking and asynchronous. This allowed web servers written in Node.js to serve thousands of requests concurrently — subsequent requests didn’t have to wait for previous ones to complete.
The first demo of Node.js generated so much interest because it was the first time a developer could easily create their own web server and have it work so well.
Over time Node.js became good at system tasks other than web serving and started to shine as a flexible yet lower-level server-side language. It could do anything typically done with Python, Ruby, Perl, or PHP, and it was faster, used less memory, and in most cases had better APIs for the system calls.
For example, with Node.js we can create HTTP and TCP servers with only a few lines of code. We’ll dive in and build one together soon, but just to show what we mean, here’s a functioning Node.js web server in only 80 characters:
require('http')
  .createServer((req, res) => res.end('hello world!'))
  .listen(8080)
A Rich Module Ecosystem
Node.js began to shine with the introduction of npm, the package manager bundled with Node.js. A core philosophy of Node.js is to have only a small collection of built-in modules that come preinstalled with the language.
Examples of these modules are fs, http, tcp, dns, events, child_process, and crypto. There’s a full list in the Node.js API documentation.
This may seem like a bad thing. Many people are puzzled as to why Node.js would choose not to have a large collection of standard modules preinstalled and available to the user. The reason is a bit counterintuitive, but it has ultimately been very successful.
Node.js wanted to encourage a rich ecosystem of third-party modules. Any module that becomes a built-in, core module will automatically prevent competition for its features. In addition, the core module can only be updated on each release of Node.js.
This has a two-fold suppression effect on module authorship. First, for each module that becomes a core module in the standard library, many third-party modules that provide similar features will never be created. Second, any core module will have its development slowed by the Node.js release schedule.
This strategy has been a great success. npm modules have grown at an incredible pace, overtaking all other package managers. In fact, one of the best things about Node.js is having access to a gigantic number of modules.
When To Use Node.js
Node.js is a great choice for any task or project where one would typically use a dynamic language like Python, PHP, Perl, or Ruby. Node.js particularly shines when used for:
- HTTP APIs,
- distributed systems,
- command-line tools, and
- cross-platform desktop applications
Node.js was created to be a great web server and it does not disappoint. In the next section, we’ll see how easy it is to build an HTTP API with the built-in core http module.
Web servers and HTTP APIs built with Node.js generally have much higher performance than those built with other dynamic languages like Python, PHP, Perl, and Ruby. This is partly because of its non-blocking nature, and partly because the V8 JavaScript engine that powers Node.js is so well optimized.
There are many popular web and API frameworks built with Node.js such as express, hapi, and restify.
Distributed systems are also very easy to build with Node.js. The core tcp module makes it very easy to communicate over the network, and useful abstractions like streams allow us to build systems using composable modules like dnode.
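As a small taste of what this looks like, here is a minimal sketch (not from this chapter’s server files) of a TCP echo server. It uses the net core module, which is how current versions of Node.js expose TCP, and treats each connection as a stream; the port number is arbitrary:

// A minimal sketch of a TCP echo server using the net core module.
// Each incoming connection is a duplex stream, so we can pipe the
// data a client sends straight back to it.
const net = require('net')

const server = net.createServer(socket => {
  socket.pipe(socket) // echo every chunk back to the sender
})

server.listen(1338, () => console.log('TCP echo server on port 1338'))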
Command-line tools can make a developer’s life much easier. Before Node.js, there wasn’t a good way to create CLIs with JavaScript. If you’re most comfortable with JavaScript, Node.js will be the best way to build these programs. In addition, there are tons of Node.js modules like yargs, chalk, and blessed that make writing CLIs a breeze.
Electron allows us to build cross-platform desktop applications using JavaScript, HTML, and CSS. It combines a browser GUI with Node.js. Using Node.js we’re able to access the filesystem, network, and other operating system resources. There’s a good chance you use a number of Electron apps regularly.
When Node.js May Not Be The Best Choice
Node.js is a dynamic, interpreted language. It is very fast compared to other dynamic languages thanks to the V8 JIT compiler. However, if you are looking for a language that can squeeze the most performance out of your computing resources, Node.js is not the best.
CPU-bound workloads can typically benefit from using a lower-level language like C, C++, Go, Java, or Rust. As an extreme example, when generating Fibonacci numbers, Rust and C are about three times faster than Node.js. If you have a specialized task that is particularly sensitive to performance, and does not need to be actively developed and maintained, consider using a lower-level language.
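To make “CPU-bound” concrete, here is a hypothetical illustration (not from the book’s examples): a naive recursive Fibonacci function. While a call like this runs, the single JavaScript thread is busy and the process can’t respond to anything else, which is exactly the kind of work better suited to a lower-level language or a worker thread.

// Hypothetical illustration: a naive, CPU-bound Fibonacci calculation.
// While fib(45) runs, the event loop is blocked and the process can't
// handle any other requests.
function fib (n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2)
}

console.time('fib')
console.log(fib(45))
console.timeEnd('fib')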
Certain specialized software communities like machine learning, scientific computing, and data science have traditionally used languages other than JavaScript. Over time they have created many packages, code examples, tutorials, and books using languages like Python, R, and Java that either do not exist in JavaScript, are not at the same level of maturity, or do not have the same level of optimization and performance. Node.js might become more popular for these tasks in the future as more flagship projects like TensorFlow.js are developed. However, at the current time, fewer people in these disciplines have much Node.js experience.
Front-end Vs. Back-end JavaScript
If you’re more familiar with using JavaScript in the browser than you are with using it in Node.js, there are a few differences worth paying attention to.
The biggest difference between Node.js and running JavaScript in the browser is the lack of globals and common browser APIs. For example, window and document are unavailable in Node.js. Of course, this should not be surprising; Node.js does not need to maintain a DOM or other browser-related technologies to function. For a list of global objects that browsers and Node.js share, see MDN’s list of Standard Built-in Objects.
Both Node.js and browser-based JavaScript can perform many of the same functions, such as accessing the network or filesystem. However, the way these functions are accomplished will be different. For example, in the browser one will use the globally available fetch() API to create an HTTP request. In Node.js, this type of action would be done by first using const http = require('http') to load the built-in core http module, and afterwards using http.get('http://www.fullstack.io/', function (res) { ... }).
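For example, here is a rough sketch of what that request looks like in Node.js (the URL is just a placeholder); we’ll look at the http module in detail in a moment:

// A rough sketch: making a GET request with the core http module.
// The callback receives a response object that emits 'data' events
// as chunks of the body arrive.
const http = require('http')

http.get('http://www.fullstack.io/', function (res) {
  let body = ''
  res.on('data', chunk => (body += chunk))
  res.on('end', () => console.log(body))
})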
Diving In: Your First Node.js API
We’re going to start off by creating our own web server. At first, it will be very simple; you’ll be able to open your browser and see some text. What makes this impressive is just how little code is required to make this happen.
const http = require('http')
const port = process.env.PORT || 1337

const server = http.createServer(function (req, res) {
  res.end('hi')
})

server.listen(port)
console.log(`Server listening on port ${port}`)
Run this file with node 01-server.js, and you should see Server listening on port 1337 printed to your terminal:
After you see that, open your browser and go to http://localhost:1337 to see your message:
Hello!
Let’s look at this file line by line. First up:
const http = require('http')
This loads the core http module and stores it in our http variable. require() is a globally accessible function in Node.js and is always available. http is a core module, which means that it is always available to be loaded via require(). Later on we’ll cover third-party modules that need to be installed before we can load them using require().
const port = process.env.PORT || 1337
Here we choose which port our web server should listen to for requests. We store the port number in our port variable.
Also, we encounter a Node.js global object, process. process is a global object with information about the currently running process; in our case it’s the process that is spawned when we run node 01-server.js. process.env is an object that contains all environment variables. If we were to run the server with PORT=3000 node 01-server.js instead, process.env.PORT would be set to 3000. Having environment variable control over port usage is a useful convention for deployment, and we’ll be starting that habit early.
const server = http.createServer(function (req, res) {
  res.end('hi')
})
Now we get to the real meat of the file. We use http.createServer() to create an HTTP server object and assign it to the server variable. http.createServer() accepts a single argument: a request listener function.
Our request listener function will be called every time there’s an HTTP request to our server (e.g., each time you hit http://localhost:1337 in your browser). Every time it is called, this function will receive two arguments: a request object (req) and a response object (res).
For now we’re going to ignore req, the request object. Later on we’ll use it to get information about the request like url and headers.
The second argument to our request listener function is the response object, res. We use this object to send data back to the browser. We can both send the string 'hi' and end the connection to the browser with a single method call: res.end('hi').
At this point our server object has been created. If a browser request comes in, our request listener function will run, and we’ll send data back to the browser. The only thing left to do is to allow our server object to listen for requests on a particular port:
server.listen(port)
Finally, for convenience, we print a message telling us that our server is running and which port it’s listening on:
console.log(`Server listening on port ${port}`)
And that’s all the code you need to create a high-performance web server with Node.js.
Of course this is the absolute minimum, and it’s unlikely that this server would be useful in the real world. From here we’ll begin to add functionality to turn this server into a usable JSON API with routing.
Serving JSON
When building web apps and distributed systems, it’s common to use JSON APIs to serve data. With one small tweak, we can change our server to do this.
In our previous example, we responded with plain text:
const server = http.createServer(function (req, res) {
  res.end('hi')
})
In this example we’re going to respond with JSON instead. To do this we’re going to replace our request listener function with a new one:
const server = http.createServer(function (req, res) {
  res.setHeader('Content-Type', 'application/json')
  res.end(JSON.stringify({ text: 'hi', numbers: [1, 2, 3] }))
})
When building a production server, it’s best to be explicit with responses so that clients (browsers and other consumers of our API) don’t handle our data in unexpected ways (e.g. rendering an image as text). By sending plain text without a Content-Type header, we didn’t tell the client what kind of data it should expect.
In this example, we’re going to let the client know our response is JSON-formatted data by setting the Content-Type response header. In certain browsers this will allow the JSON data to be displayed with pretty printing and syntax highlighting. To set the Content-Type we use the res.setHeader() method:
res.setHeader('Content-Type', 'application/json')
Next, we use the same method as last time to send data and close the connection. The only difference is that instead of sending plain text, we’re sending a JSON-stringified object:
res.end(JSON.stringify({ text: 'hi', numbers: [1, 2, 3] }))
Run node 02-server.js and navigate to http://localhost:1337 in your browser to see our new JSON response:
What our JSON response looks like
Not all browsers will pretty-print JSON. In this screenshot I’m using Firefox, but there are several extensions available for Chrome like JSON Formatter that will achieve the same result.
Of course, our API needs some work before it’s useful. One particular issue is that no matter the URL path we use, our API will always return the same data. You can see this behavior by navigating to each of these URLs:
- http://localhost:1337/a
- http://localhost:1337/fullstack
- http://localhost:1337/some/long/random/url
If we want to add functionality to our API, we should be able to handle different requests to different url paths or endpoints. For starters, we should be able to serve both our original plain text response and our new JSON response.
Basic Routing
Not all client requests are the same, and to create a useful API, we should be able to respond differently depending on the requested url path.
We previously ignored the request object argument, req, and now we’re going to use that to see what url path the client is requesting. Depending on the path, we can do one of three things:
- respond with plain text,
- respond with JSON, or
- respond with a 404 “Not Found” error.
We’re going to change our request listener function to perform different actions depending on the value of req.url. The url property of the req object will always contain the full path of the client request. For example, when we navigate to http://localhost:1337 in the browser, the path is /, and when we navigate to http://localhost:1337/fullstack, the path is /fullstack.
We’re going to change our code so that when we open http://localhost:1337 we see our initial plain-text “hi” message, when we open http://localhost:1337/json we’ll see our JSON object, and if we navigate to any other path, we’ll receive a 404 “Not Found” error.
We can do this very simply by checking the value of req.url in our request listener function and running a different function depending on its value.
First we need to create our different functions — one for each behavior. We’ll start with the functions for responding with plain-text and JSON. These new functions will use the same arguments as our request listener function and behave exactly the same as they did before:
function respondText (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.end('hi')
}

function respondJson (req, res) {
  res.setHeader('Content-Type', 'application/json')
  res.end(JSON.stringify({ text: 'hi', numbers: [1, 2, 3] }))
}
The third function will have new behavior. For this one, we’ll respond with a 404 “Not Found” error. To do this, we use the res.writeHead() method. This method lets us set both a response status code and headers. We use this to respond with a 404 status and to set the Content-Type to text/plain.
The 404 status code tells the client that the communication to the server was successful, but the server is unable to find the requested data.
After that, we simply end the response with the message “Not Found”:
function respondNotFound (req, res) {
  res.writeHead(404, { 'Content-Type': 'text/plain' })
  res.end('Not Found')
}
What our server returns for paths that don’t exist
With our functions created, we can now create a request listener function that calls each one depending on the path in req.url:
const server = http.createServer(function (req, res) {
  if (req.url === '/') return respondText(req, res)
  if (req.url === '/json') return respondJson(req, res)
  respondNotFound(req, res)
})
Now that we have basic routing set up, we can add more functionality to our server at different endpoints.
Dynamic Responses
Currently, our endpoints respond with the same data every time. For our API to be dynamic, it needs to change its responses according to input from the client.
For apps and services in the real world, the API will be responsible for pulling data out of a database or other resource according to specific queries sent by the client and filtered by authorization rules.
For example, the client may want the most recent comments by user dguttman. The API server would first look to see if that client has authorization to view the comments of dguttman, and if so, it will construct a query to the database for this data set.
To add this style of functionality to our API, we’re going to add an endpoint that accepts arguments via query parameters. We’ll then use the information provided by the client to create the response. Our new endpoint will be /echo and the client will provide input via the input query parameter. For example, to provide “fullstack” as input, the client will use /echo?input=fullstack as the url path.
Our new endpoint will respond with a JSON object with the following properties:
- normal: the input string without a transformation
- shouty: all caps
- characterCount: the number of characters in the input string
- backwards: the input string ordered in reverse
To begin, we’ll first have our request listener function check to see if the req.url begins with /echo, the endpoint that we’re interested in. If it does, we’ll call our soon-to-be-created function respondEcho():
const server = http.createServer(function (req, res) {
  if (req.url === '/') return respondText(req, res)
  if (req.url === '/json') return respondJson(req, res)
  if (req.url.match(/^\/echo/)) return respondEcho(req, res)
  respondNotFound(req, res)
})
Next, we create the respondEcho( ) function that will accept the request and response objects. Here’s what the completed function looks like:
function respondEcho (req, res) {
  const { input = '' } = querystring.parse(
    req.url
      .split('?')
      .slice(1)
      .join('')
  )

  res.setHeader('Content-Type', 'application/json')
  res.end(
    JSON.stringify({
      normal: input,
      shouty: input.toUpperCase(),
      characterCount: input.length,
      backwards: input
        .split('')
        .reverse()
        .join('')
    })
  )
}
The important thing to notice is that the first line of our function uses the querystring.parse() method. To be able to use this, we first need to use require() to load the querystring core module. Like http, this module is installed with Node.js and is always available. At the top of our file we’ll add this line:
const querystring = require('querystring')
Looking at our respondEcho() function again, here’s how we use querystring.parse():
const { input = '' } = querystring.parse(
  req.url
    .split('?')
    .slice(1)
    .join('')
)
We expect the client to access this endpoint with a url like /echo?input=some input. querystring.parse() accepts a raw querystring argument. It expects the format to be something like query1=value1&query2=value2.
The important thing to note is that querystring.parse() does not want the leading ?. Using some quick string transformations we can isolate the input=some input part of the url, and pass that in as our argument.
querystring.parse() will return a simple JavaScript object with query param key and value pairs, for example { input: 'some input' }. Currently, we’re only interested in the input key, so that’s the only value that we’ll store. If the client doesn’t provide an input parameter, we’ll set a default value of ''.
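As a quick standalone sanity check (a sketch, not part of the server file), here is roughly how that parsing and default value behave:

// A quick sketch of querystring.parse() and the destructuring default.
const querystring = require('querystring')

const parsed = querystring.parse('input=some input')
console.log(parsed.input) // 'some input'

// If the parameter is missing, destructuring falls back to the default:
const { input = '' } = querystring.parse('')
console.log(input) // ''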
Next up, we set the appropriate Content-Type header for JSON like we have before:
res.setHeader('Content-Type', 'application/json')
Finally, we use res.end() to send data to the client and close the connection:
res.end(
  JSON.stringify({
    normal: input,
    shouty: input.toUpperCase(),
    characterCount: input.length,
    backwards: input
      .split('')
      .reverse()
      .join('')
  })
)
And there we have it, our first dynamic route! While simple, this gives us a good foundation for being able to use client-provided input for use in our API endpoints.
Our server can now respond dynamically to different inputs
File Serving
One of the most common uses of a web server is to serve html and other static files. Let’s take a look at how we can serve up a directory of files with Node.js.
What we want to do is to create a local directory and serve all files in that directory to the browser. If we create a local directory called public and we place two files in there, index.html and ember.jpg, we should be able to visit http://localhost:1337/static/index.html and http://localhost:1337/static/ember.jpg to receive them. If we were to place more files in that directory, they would behave the same way.
To get started, let’s create our new directory public and copy our two example files, index.html and ember.jpg, into it.
The first thing we’ll need to do is to create a new function for static file serving and call it when a request comes in with an appropriate req.url property. To do this we’ll add a fourth conditional to our request listener that checks for paths that begin with /static and calls respondStatic(), the function we’ll create next:
const server = http.createServer(function (req, res) {
  if (req.url === '/') return respondText(req, res)
  if (req.url === '/json') return respondJson(req, res)
  if (req.url.match(/^\/echo/)) return respondEcho(req, res)
  if (req.url.match(/^\/static/)) return respondStatic(req, res)
  respondNotFound(req, res)
})
And here’s the respondStatic() function we need:
function respondStatic (req, res) {
  const filename = `${__dirname}/public${req.url.split('/static')[1]}`
  fs.createReadStream(filename)
    .on('error', () => respondNotFound(req, res))
    .pipe(res)
}
The first line is fairly straightforward. We perform a simple conversion so that we can translate the incoming req.url path to an equivalent file in our local directory. For example, if the req.url is /static/index.html, this conversion will translate it to public/index.html.
Next, once we have our filename from the first line, we want to open that file and send it to the browser. However, before we can do that, we need to use a module that will allow us to interact with the filesystem. Just like http and querystring, fs is a core module that we can load with require(). We make sure that this line is at the top of our file:
const fs = require('fs')
We use the fs.createReadStream() method to create a stream object representing our chosen file. We then use that stream object’s pipe() method to connect it to the response object. Behind the scenes, this efficiently loads data from the filesystem and sends it to the client via the response object.
We also use the stream object’s on() method to listen for errors. If any error occurs while reading a file (e.g. we try to read a file that doesn’t exist), we send the client our “Not Found” response.
If this doesn’t make much sense yet, don’t worry. We’ll cover streams in more depth in later chapters. The important thing to know is that they’re a very useful tool within Node.js, and they allow us to quickly connect data sources (like files) to data destinations (client connections).
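As a small taste of what streams make possible, here’s a sketch separate from our server that pipes a file to the terminal; the filename is just an example:

// A small sketch: streams connect a data source (a file) to a destination
// (standard output) without loading the whole file into memory first.
const fs = require('fs')

fs.createReadStream('public/index.html')
  .on('error', err => console.error('could not read file:', err.message))
  .pipe(process.stdout)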
Now run node 05-server.js and visit http://localhost:1337/static/index.html in your browser. You should see the index.html page from /public load in your browser. Additionally, you should notice that the image on this page also comes from the /public directory.
Now serving static files. Notice how the path /static is mapped to the local directory /public
Here’s what the full 05-server.js file looks like:
const fs = require('fs')
const http = require('http')
const querystring = require('querystring')

const port = process.env.PORT || 1337

const server = http.createServer(function (req, res) {
  if (req.url === '/') return respondText(req, res)
  if (req.url === '/json') return respondJson(req, res)
  if (req.url.match(/^\/echo/)) return respondEcho(req, res)
  if (req.url.match(/^\/static/)) return respondStatic(req, res)
  respondNotFound(req, res)
})

server.listen(port)
console.log(`Server listening on port ${port}`)

function respondText (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.end('hi')
}

function respondJson (req, res) {
  res.setHeader('Content-Type', 'application/json')
  res.end(JSON.stringify({ text: 'hi', numbers: [1, 2, 3] }))
}

function respondEcho (req, res) {
  const { input = '' } = querystring.parse(
    req.url
      .split('?')
      .slice(1)
      .join('')
  )

  res.setHeader('Content-Type', 'application/json')
  res.end(
    JSON.stringify({
      normal: input,
      shouty: input.toUpperCase(),
      characterCount: input.length,
      backwards: input
        .split('')
        .reverse()
        .join('')
    })
  )
}

function respondStatic (req, res) {
  const filename = `${__dirname}/public${req.url.split('/static')[1]}`
  fs.createReadStream(filename)
    .on('error', () => respondNotFound(req, res))
    .pipe(res)
}

function respondNotFound (req, res) {
  res.writeHead(404, { 'Content-Type': 'text/plain' })
  res.end('Not Found')
}
Up until this point we’ve only used core modules that come built into Node.js. Hopefully we’ve shown how easy it is to create APIs and static file servers with the core modules of Node.js.
However, one of the best things about Node.js is its incredible ecosystem of third-party modules available on npm (over 842,000 as I write this). Next we’re going to show how our server would look using express, the most popular web framework for Node.js.
Express
Express is a “fast, unopinionated, minimalist web framework for Node.js” used in production environments the world over. express is so popular that many people have never tried to use Node.js without express.
Express is a drop-in replacement for the core http module. The biggest difference between using express and core http is routing, because unlike core http, express comes with a built-in router.
In our previous examples with core http, we handled our own routing by using conditionals on the value of req.url. Depending on the value of req.url we execute a different function to return the appropriate response.
In our /echo and /static endpoints we go a bit further. We pull additional information out of the path that we use in our functions.
In respondEcho() we use the querystring module to get key-value pairs from the search query in the url. This allows us to get the input string that we use as the basis for our response.
In respondStatic(), anything after /static/ is interpreted as a file name that we pass to fs.createReadStream().
By switching to express we can take advantage of its router which simplifies both of these use-cases. Let’s take a look at how the beginning of our server file changes:
const fs = require('fs')
const express = require('express')

const port = process.env.PORT || 1337

const app = express()

app.get('/', respondText)
app.get('/json', respondJson)
app.get('/echo', respondEcho)
app.get('/static/*', respondStatic)

app.listen(port, () => console.log(`Server listening on port ${port}`))
First, you’ll notice that we add a require() for express on line 2:
const express = require('express')
Unlike our previous examples, this is a third-party module, and it does not come with our installation of Node.js. If we were to run this file before installing express, we would see a Cannot find module 'express' error like this:
What it looks like when we try to use a third-party module before it’s installed
To avoid this, we need to install express using npm. When we installed Node.js, npm was installed automatically as well. To install express, all we need to do is run npm install express from our application directory. npm will fetch express from its repository and place the express module in a folder called node_modules in your current directory. If node_modules does not exist, it will be created. When we run a JavaScript file with Node.js, Node.js will look for modules in the node_modules folder.
Node.js will also look in other places for modules such as parent directories and the global npm installation directory, but we don’t need to worry about those for right now.
The next thing to notice is that we no longer use http.createServer() and pass it a request listener function. Instead we create a server instance with express(). By express convention we call this app.
Once we’ve created our server instance, app, we take advantage of express routing. By using app.get() we can associate a path with an endpoint function. For example, app.get('/', respondText) makes it so that when we receive an HTTP GET request to '/', the respondText() function will run.
This is similar to our previous example where we run respondText() when req.url === '/'. One difference in functionality is that express will only run respondText() if the method of the request is GET. In our previous examples, we were not specific about which types of methods we would respond to, and therefore we would have also responded the same way to POST, PUT, DELETE, OPTIONS, or PATCH requests.
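If we did want our express server to accept other methods, we would register them explicitly. The lines below are a hypothetical illustration, not something our server needs:

// Hypothetical examples of method-specific routing in express:
app.post('/json', respondJson)   // only POST requests to /json
app.all('/echo', respondEcho)    // any HTTP method to /echo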
Under the covers, express uses core http. This means that if you understand core http, you’ll have an easy time with express. For example, we don’t need to change our respondText() function at all:
function respondText (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.end('hi')
}
However, the nice thing about express is that it gives us a lot of helpers and niceties that can make our code more succinct. For example, because responding with JSON is so common, express adds a json() function to the response object. By calling res.json(), express will automatically send the correct Content-Type header and stringify our response body for us. Here’s our respondJson() function updated for express:
function respondJson (req, res) {
  res.json({ text: 'hi', numbers: [1, 2, 3] })
}
Another difference with express routing is that we don’t have to worry about search query parameters when defining our routes. In our previous example we used a regular expression to check that the req.url string started with /echo instead of checking for equality (like we did with / and /json). We did this because we wanted to allow for user-created query parameters. With express, app.get('/echo', respondEcho) will call respondEcho() for any value of the search query parameters, even if they are missing.
Additionally, express will automatically parse search query parameters (e.g. ?input=hi) and make them available to us as an object (e.g. { input: 'hi' }). These parameters can be accessed via req.query. Here’s an updated version of respondEcho() that uses both req.query and res.json():
function respondEcho (req, res) {
  const { input = '' } = req.query

  res.json({
    normal: input,
    shouty: input.toUpperCase(),
    characterCount: input.length,
    backwards: input
      .split('')
      .reverse()
      .join('')
  })
}
The last thing to notice about express routing is that we can add regex wildcards like * to our routes:
app.get('/static/*', respondStatic)
This means that our server will call respondStatic() for any path that begins with /static/. What makes this particularly helpful is that express will make the wildcard match available on the request object. Later we’ll be able to use req.params to get the filenames for file serving. For our respondStatic() function, we can take advantage of wildcard routing. In our /static/* route, the star will match anything that comes after /static/. That match will be available for us at req.params[0].
Behind the scenes, express uses path-to-regexp to convert route strings into regular expressions. To see how different route strings are transformed into regular expressions and how parts of the path are stored in req.params, there’s the excellent Express Route Tester tool.
Here’s how we can take advantage of req.params in our respondStatic() function:
function respondStatic (req, res) {
  const filename = `${__dirname}/public/${req.params[0]}`
  fs.createReadStream(filename)
    .on('error', () => respondNotFound(req, res))
    .pipe(res)
}
With just a few changes we’ve converted our core http server to an express server. In the process we gained more powerful routing and cleaned up a few of our functions.
Next up, we’ll continue to build on our express server and add some real-time functionality.
Real-Time Chat
When Node.js was young, some of the most impressive demos involved real-time communication. Because Node.js is so efficient at handling I/O, it became easy to create real-time applications like chat and games.
To show off the real-time capabilities of Node.js we’re going to build a chat application. Our application will allow us to open multiple browsers where each has a chat window. Any browser can send a message, and it will appear in all the other browser windows.
Our chat messages will be sent from the browser to our server, and our server will in turn send each received message to all connected clients.
For example, if Browser A is connected, Browser A will send a message to our server, which will in turn send the message back to Browser A. Browser A will receive the message and render it. Each time Browser A sends a message, it will go to the server and come back.
Of course, chat applications are only useful when there are multiple users, so when both Browser A and Browser B are connected, and either sends a message to our server, our server will send that message to both Browser A and Browser B.
In this way, all connected users will be able to see all messages.
Traditionally, for a browser to receive data from the server, the browser has to make a request. This is how all of our previous examples work. For example, the browser has to make a request to /json to receive the JSON payload data. The server cannot send that data to the browser before the request happens.
To create a chat application using this model, the browser would have to constantly make requests to the server to ask for new messages. While this would work, there is a much better way.
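For reference, the polling approach would look something like this on the client; a sketch only, and it assumes a hypothetical /messages endpoint that returns JSON:

// A sketch of client-side polling: ask the server for new messages
// every second. This works, but it wastes requests and adds latency.
setInterval(async () => {
  const res = await window.fetch('/messages') // hypothetical endpoint
  const messages = await res.json()
  // render messages...
}, 1000)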
We’re going to create a chat application where the server can push messages to the browser without the browser needing to make multiple requests. To do this, we’ll use a technology called SSE (Server-Sent Events). SSE is available in most browsers through the EventSource API, and is a very simple way for the server to “push” messages to the browser.
What about websockets? Much like websockets, SSE is a good way to avoid having the browser poll for updates. SSE is much simpler than websockets and provides us the same functionality. You can read more about SSE and the EventSource API in the MDN web docs.
Building the App
To set this up we’ll need to create a simple client app. We’ll create a chat.html page with a little bit of HTML and JavaScript. We’ll need a few things:
- JavaScript function to receive new messages from our server
- JavaScript function to send messages to the server
- HTML element to display messages
- HTML form element to enter messages
Here’s what that looks like:
<!DOCTYPE html>
<html lang="en">
  <title>Chat App</title>
  <link rel="stylesheet" href="tachyons.min.css">
  <link rel="stylesheet" href="chat.css">
  <body>
    <div id="messages">
      <h4>Chat Messages</h4>
    </div>
    <form id="form">
      <input
        id="input"
        type="text"
        placeholder="Your message...">
    </form>
    <script src="chat.js"></script>
  </body>
</html>
This is some basic HTML. We load some stylesheets from our public directory, create some elements to display and create messages, and finally we load some simple client-side JavaScript to handle communication with our server.
Here’s what chat.js, our client-side JavaScript, looks like:
new window.EventSource('/sse').onmessage = function (event) {
  window.messages.innerHTML += `<p>${event.data}</p>`
}

window.form.addEventListener('submit', function (evt) {
  evt.preventDefault()
  window.fetch(`/chat?message=${window.input.value}`)
  window.input.value = ''
})
We’re doing two basic things here. First, when our soon-to-be-created /sse route sends new messages, we add them to the div element on the page with the id messages.
Second, we listen for when the user enters a message into the text box. We do this by adding an event listener function to our form element. Within this listener function we take the value of the input box, window.input.value, and send a request to our server with the message. We do this by sending a GET request to the /chat path with the message encoded in the query parameters. After we send the message, we clear the text box so the user can enter a new message.
While the client-side markup and code is very simple, there are two main takeaways. First, we need to create a /chat route that will receive chat messages from a client, and second, we need to create a /sse route that will send those messages to all connected clients.
Our /chat endpoint will be similar to our /echo endpoint in that we’ll be taking data (a message) from the url’s query parameters. Instead of looking at the ?input= query parameter like we did in /echo, we’ll be looking at ?message=.
However, there will be two important differences. First, unlike /echo we don’t need to write any data when we end our response. Our chat clients will be receiving message data from /sse. In this route, we can simply end the response. Because our server will send a 200 “OK” HTTP status code by default, this will act as a sufficient signal to our client that the message was received and correctly handled.
The second difference is that we will need to take the message data and put it somewhere our other route will be able to access it. To do this we’re going to instantiate an object outside of our route function’s scope so that when we create another route function, it will be able to access this “shared” object.
This shared object will be an instance of EventEmitter that we will call chatEmitter. We don’t need to get into the details yet, but the important thing to know right now is that this object will act as an information relay. The EventEmitter class is available through the core events module, and EventEmitter objects have an emit(eventName[, ...args]) method that is useful for broadcasting data. When a message comes in, we will use chatEmitter.emit() to broadcast the message. Later, when we create the /sse route we can listen for these broadcasts.
For this example, it’s not important to know exactly how event emitters work, but if you’re curious about the details, check out the next chapter on async or the official API documentation for EventEmitter.
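As a tiny standalone sketch of the pattern we’re about to use (not part of the server file): one part of the program emits named events, and any number of listeners receive them.

// A tiny sketch of the EventEmitter pattern.
const EventEmitter = require('events')
const emitter = new EventEmitter()

emitter.on('message', msg => console.log('got:', msg))
emitter.emit('message', 'hello') // logs "got: hello"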
First, add a new route just like before:
app.get('/chat', respondChat)
And then we create the corresponding function:
function respondChat (req, res) {
  const { message } = req.query
  chatEmitter.emit('message', message)
  res.end()
}
There are two things to notice here. First, we access the message sent from the browser in a similar way to how we accessed input in the /echo route, but this time we use the message property instead. Second, we’re calling a method on an object that we haven’t created yet: chatEmitter.emit('message', message). If we were to visit http://localhost:1337/chat?message=hi in our browser right now, we would get an error like this:
ReferenceError: chatEmitter is not defined
To fix this, we’ll need to add two lines to the top of our file. First, we require the EventEmitter class via the events module, and then we create an instance:
const EventEmitter = require('events')
const chatEmitter = new EventEmitter()
Now our app can receive messages, but we don’t do anything with them yet. If you’d like to verify that this route is working, you can add a line that logs messages to the console. After the chatEmitter object is declared, add a line that listens for messages like this:
const chatEmitter = new EventEmitter()
chatEmitter.on('message', console.log)
Then visit http://localhost:1337/chat?message=hello! in your browser and verify that the message “hello!” is logged to your terminal.
With that working, we can now add our /sse route that will send messages to our chat clients once they connect with the new window.EventSource('/sse') described above:
app.get('/sse', respondSSE)
And then we can add our respondSSE() function:
function respondSSE (req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Connection': 'keep-alive'
  })

  const onMessage = msg => res.write(`data: ${msg}\n\n`)
  chatEmitter.on('message', onMessage)

  res.on('close', function () {
    chatEmitter.off('message', onMessage)
  })
}
Let’s break this down into its three parts:
1. We establish the connection by sending a 200 OK status code and appropriate HTTP headers according to the SSE specification:
res.writeHead(200, {
  'Content-Type': 'text/event-stream',
  'Connection': 'keep-alive'
})
2. We listen for message events from our chatEmitter object, and when we receive them, we write them to the response body using res.write(). We use res.write() instead of res.end() because we want to keep the connection open, and we use the data format from the SSE specification:
const onMessage = msg => res.write(`data: ${msg}\n\n`)
chatEmitter.on('message', onMessage)
3. We listen for when the connection to the client has been closed, and when that happens we disconnect our onMessage() function from our chatEmitter object. This prevents us from writing messages to a closed connection:
res.on('close', function () {
  chatEmitter.off('message', onMessage)
})
After adding this route and function, we now have a functioning real-time chat app. If you open http://localhost:1337/static/chat.html in multiple browsers you’ll be able to send messages back and forth:
If you open two browsers you can talk to yourself
Here’s our fully completed server file:
const fs = require('fs')
const express = require('express')
const EventEmitter = require('events')

const chatEmitter = new EventEmitter()
const port = process.env.PORT || 1337

const app = express()

app.get('/', respondText)
app.get('/json', respondJson)
app.get('/echo', respondEcho)
app.get('/static/*', respondStatic)
app.get('/chat', respondChat)
app.get('/sse', respondSSE)

app.listen(port, () => console.log(`Server listening on port ${port}`))

function respondText (req, res) {
  res.setHeader('Content-Type', 'text/plain')
  res.end('hi')
}

function respondJson (req, res) {
  res.json({ text: 'hi', numbers: [1, 2, 3] })
}

function respondEcho (req, res) {
  const { input = '' } = req.query

  res.json({
    normal: input,
    shouty: input.toUpperCase(),
    characterCount: input.length,
    backwards: input
      .split('')
      .reverse()
      .join('')
  })
}

function respondStatic (req, res) {
  const filename = `${__dirname}/public/${req.params[0]}`
  fs.createReadStream(filename)
    .on('error', () => respondNotFound(req, res))
    .pipe(res)
}

function respondChat (req, res) {
  const { message } = req.query
  chatEmitter.emit('message', message)
  res.end()
}

function respondSSE (req, res) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Connection': 'keep-alive'
  })

  const onMessage = msg => res.write(`data: ${msg}\n\n`)
  chatEmitter.on('message', onMessage)

  res.on('close', function () {
    chatEmitter.off('message', onMessage)
  })
}

function respondNotFound (req, res) {
  res.writeHead(404, { 'Content-Type': 'text/plain' })
  res.end('Not Found')
}
Wrap Up
In this chapter we’ve shown how Node.js combines the power and performance of a low-level language with the flexibility and maintainability of a high-level language.
We’ve created our first API, and we learned how to send different data formats like plain-text and JSON, respond dynamically to client input, serve static files, and handle real-time communication. Not bad!
Challenges
- Add a “chat log” feature: use fs.appendFile() to write chat messages sent through the chat app to the filesystem.
- Add a “previous messages” feature: using the chat log, use fs.readFile() to send previous messages to clients when they first connect.
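If you get stuck on the first challenge, here’s one possible starting point, a sketch only; the chat.log filename is arbitrary and error handling is minimal:

// One possible approach to the "chat log" challenge: append each
// incoming message to a file before broadcasting it.
function respondChat (req, res) {
  const { message } = req.query
  fs.appendFile('chat.log', `${message}\n`, err => {
    if (err) console.error(err)
  })
  chatEmitter.emit('message', message)
  res.end()
}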