CHAPTER 3
A Complete Server: Getting Started
We’re ready to start building a complete server API. We’re going to build a production e-commerce app that can be adapted as a starting point for almost any company.
Soon, we’ll cover things like connecting to a database, handling authentication, logging, and deployment. To start, we’re going to serve our product listing. Once our API is serving a product listing, it can be used and displayed by a web front-end.
We’re going to build our API so that it can be used not only by a web front-end, but also by mobile apps, CLIs, and programmatically as a library. Not only that, we’re going to write our tests as we build it so that we can be sure that it works exactly as intended and won’t break as we make additions and changes.
Our specific example will be for a company that sells prints of images — each product will be an image. Of course, the same concepts can be applied if we’re selling access to screencasts, consulting services, physical products, or ebooks. At the core, we’re going to need to be able to store, organize, and serve our product offering.
Getting Started
To get started with our app, we’ll need something to sell. Since our company will be selling prints of images, we’ll need a list of images that are available for purchase. We’re going to use images from Unsplash.
Using this list of images, we’ll build an API that can serve them to the browser or other clients for display. Later, we’ll cover how our API can handle purchasing and other user actions.
Our product listing will be a JSON file that is an array of image objects. Here’s what our products.json file looks like:
"id"’ "trYlTJYATH0",
"description”’ "This is the Department and Water and Power building in Downtown \ Los Angeles (John Ferraro Building).",
"imgThumb"’ "https://images.unsplash.com/photo...",
"img"’ "https: //images. unsplash.com/photo.. . ”,
"link" : "https://unsplash.com/photos/trYl7JYATH0",
"userId”’ "X0ygkSu4Sxo",
"userName" "Josh Rose",
"userLink" ’ "https: //unsplash.com/Sjoshsrose",
"tags": [
"building",
"architecture",
"corner",
"black",
"dark",
"balcony",
"night " ,
"Tiles" ,
"pyramid" ,
"perspective" ,
"graphic " ,
"black and white" ,
"los angeles",
"angle" ,
"design"
As we can see, each product in the listing has enough information for our client to use. We can already imagine a web app that can display a thumbnail for each, along with some tags and the artist’s name.
Our listing is already in a format that a client can readily use, so our API can start simple. Using much of what we’ve covered in Chapter 1, we can make this entire product listing available at an endpoint like http://localhost:1337/products. At first, we’ll return the entire listing at once, but we’ll quickly add the ability to request specific pages and filtering.
Our goal is to create a production-ready app, so we need to make sure that it is well tested. We’ll also want to create our own client library that can interact with our API, and make sure that is well tested as well. Luckily, we can do both at the same time.
We’ll write tests that use our client library, which in turn will hit our API. When we see that our client library is returning expected values, we will also know that our API is functioning properly.
Serving the Product Listing
At this stage, our expectations are pretty clear. We should be able to visit http://localhost:1337/products and see our product listing. What this means is that if we go to that URL in our browser, we should see the contents of products.json.
We can re-use much of what we covered in Chapter 1. We are going to:
- Create an express server
- Listen for a GET request on the /products route
- Create a request handler that reads the products.json file from disk and sends it as a response
Once we have that up, we’ll push on a bit farther and:
- Create a client library
- Write a test verifying that it works as expected
- View our products via a separate web app
Express Server
To start, our express server will be quite basic:
const fs = require('fs').promises
const path = require('path')
const express = require('express')

const port = process.env.PORT || 1337

const app = express()

app.get('/products', listProducts)

app.listen(port, () => console.log(`Server listening on port ${port}`))

async function listProducts (req, res) {
  const productsFile = path.join(__dirname, '../products.json')
  try {
    const data = await fs.readFile(productsFile)
    res.json(JSON.parse(data))
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}
Going through it, we start by using require() to load express and the core modules that we’ll need later to load our product listing: fs and path.
const fs = require('fs').promises
const path = require('path')
const express = require('express')
Next, we create our express app, set up our single route (using a route handler that we will define lower in the file), and start listening on our chosen port (either specified via environment variable or our default, 1337):
const port = process.env.PORT || 1337

const app = express()

app.get('/products', listProducts)

app.listen(port, () => console.log(`Server listening on port ${port}`))
Finally, we define our route handler, listProducts(). This function takes the request and response objects as arguments, loads the product listing from the file system, and serves it to the client using the response object:
async function listProducts (req, res) {
  const productsFile = path.join(__dirname, '../products.json')
  try {
    const data = await fs.readFile(productsFile)
    res.json(JSON.parse(data))
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}
Here, we use the core path module along with the globally available __dirname string to create a reference to where we are keeping our products.json file. __dirname is useful for creating absolute paths to files using the current module as a starting point.
NOTE: Unlike require(), fs methods like fs.readFile() will resolve relative paths using the current working directory. This means that our code will look in different places for a file depending on which directory we run the code from. For example, let’s imagine that we have created files /Users/fullstack/project/example.js and /Users/fullstack/project/data.txt, and in example.js we had fs.readFile('data.txt', console.log). If we were to run node example.js from within /Users/fullstack/project, data.txt would be loaded and logged correctly. However, if instead we were to run node project/example.js from /Users/fullstack, we would get an error. This is because Node.js would unsuccessfully look for data.txt in the /Users/fullstack directory, because that is our current working directory.
NOTE: __dirname is one of the 21 global objects available in Node.js. Global objects do not need us to use require() to make them available in our code. __dirname is always set to the directory name of the current module. For example, if we created a file /Users/fullstack/example.js that contained console.log(__dirname), when we run node example.js from /Users/fullstack, we could see that __dirname is /Users/fullstack. Similarly, if we wanted the module’s file name instead of the directory name, we could use the __filename global instead.
After we use fs.readFile(), we parse the data and use the express method res.json() to send a JSON response to the client. res.json() will both automatically set the appropriate content-type header and format the response for us.
If we run into an error loading the data from the file system, or if the JSON is invalid, we will send an error to the client instead. We do this by using the express helper res.status() to set the status code to 500 (server error) and send a JSON error message.
Now when we run node 01/server-01.js we should see our server start up with the message Server listening on port 1337, and we can verify that our product listing is being correctly served by visiting http://localhost:1337/products in our browser.
With our API up, we can now build out different ways of using it. Since our API will only return JSON, it won’t care about how the data is being used or rendered. This means that we can have many different front-ends or clients use it.
Our API shouldn’t care whether the client is a web app hosted on a CDN, a command-line utility like curl, a native mobile app, a library being used by a larger code base, or an HTML file sitting on someone’s desktop. As long as the client understands HTTP, everything should work fine.
We’ll start by showing how we can get a standard web front-end using the API, and afterwards we’ll move on to create our own client library that could be used by other apps.
Web Front-End
Now that our server is successfully serving our product listing, we can use a web front-end to show it off.
To start we’ll create a simple HTML page that will fetch our endpoint and use console.log() to display the data:
<!DOCTYPE html>
<html lang="en" dir="ltr">
  <head>
    <meta charset="utf-8">
    <title>Fullstack Prints</title>
  </head>
  <body>
    <script>
      const url = 'http://localhost:1337/products'
      fetch(url).then(res => res.json()).then(products => console.log(products))
    </script>
  </body>
</html>
If our server is running and we open this up in the browser, this is what we’ll see:
Our cross-origin request was blocked.
Unfortunately, because our HTML page was served from a different origin than our API, our browser is blocking the request. For security reasons, browsers will not load data from other origins/domains unless the other server returns the correct CORS (Cross-Origin Resource Sharing) headers. If we were serving the HTML directly from our server, we wouldn’t have this problem. However, it’s important to us that any web client can access our API — not only clients hosted by our API.
NOTE: If you'd like to learn about CORS in more detail, check out this guide on MDN.
Luckily for us, it’s easy enough to modify our server to return the correct headers. To send the necessary CORS header, we only need to add a single line to our route handler:
res.setHeader('Access-Control-Allow-Origin', '*')
Now when our server responds, it will tell the browser that it should accept data from our server no matter which origin the HTML is loaded from. If we close our other server, run node 01/server-02.js, and re-open 04/client/index-01.html in our browser, this is what we should see:
Our console.log() shows data from our API.
Success! Now that our front-end can receive data from our server, we can build out a simple interface. To start we’ll just create an element for each product, showing its thumbnail image, tags, and artist:
Using HTML to display our products
We won't cover the front-end code here, but it's available at code/src/03-complete-server/01/client/index.html.
Modularize
Now that we’ve got the basics up and running, let’s modularize our app so that we can keep things tidy as we grow its functionality. A single-file production API would be very difficult to maintain.
One of the best things about Node.js is its module system, and we can leverage that to make sure that our app’s functionality is divided into individual modules with clear responsibility boundaries.
Until this point, we’ve only used require() to pull in functionality from Node.js core modules like http and third-party modules like express. However, more often than not, we’ll be using require() to load local modules.
To load a local module (i.e. another JS file we’d like to use), we need to do two things.
First, we call require() with a relative path argument, e.g. const addModule = require('./add-module.js'). If there’s a file add-module.js in the same directory as our current file, it will be loaded into the variable addModule.
Second, for this to work properly, add-module.js needs to use module.exports to specify which object or function will be made available. For example, if we want the variable addModule to be an add function that we could use like addModule(1, 2) (and expect 3 as the result), we would make sure that add-module.js contains module.exports = (n1, n2) => n1 + n2.
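As a quick sketch of both pieces together (the file names here are just for illustration):

// add-module.js
module.exports = (n1, n2) => n1 + n2

// index.js, in the same directory
const addModule = require('./add-module.js')
console.log(addModule(1, 2)) // logs 3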
For our server, we want to pull all of our route handlers out into their own file. We’re going to call this file api.js, and we can make it accessible in our server.js file by using require('./api') (we’re allowed to omit the .js extension if we’d like).
Then, in api.js, we will export an object with each of our route handlers as properties. As of right now, we only have a single handler, listProducts(), so it will look like module.exports = { listProducts } (we’ll have the function declaration elsewhere in the file).
Here’s our server.js:
const express = require('express')
const api = require('./api')

const port = process.env.PORT || 1337

const app = express()

app.get('/products', api.listProducts)

app.listen(port, () => console.log(`Server listening on port ${port}`))
And here’s our api.js:
const Products = require('./products')

module.exports = {
  listProducts
}

async function listProducts (req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*')
  try {
    res.json(await Products.list())
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}
As we can see, on the very first line, we use require() to pull in another module, ./products. This will be the data model for our products. We keep the data model separate from the API module because the data model doesn’t need to know about HTTP request and response objects.
Our API module is responsible for converting HTTP requests into HTTP responses. The module achieves this by leveraging other modules. Another way of thinking about this is that this module is the connection point between the outside world and our internal methods. If you’re familiar with Ruby on Rails style MVC, this would be similar to a Controller.
Let’s now take a look at our products module:
const fs = require('fs').promises
const path = require('path')

const productsFile = path.join(__dirname, '../products.json')

module.exports = {
  list
}

async function list () {
  const data = await fs.readFile(productsFile)
  return JSON.parse(data)
}
Currently, our products module only has a single method, list(). We should also notice that this module is general-purpose; we want to keep it that way so that it can be used by our API, a CLI tool, or our tests.
Data flow of our modules
Another way to think of how we organize our modules is that the server module is on the outside. This module is responsible for creating the web server object, setting up middleware (more on this in a bit), and connecting routes to route handler functions. In other words, our server module connects external URL endpoints to internal route handler functions.
Our next module is the API. This module is a collection of route handlers. Each route handler is responsible for accepting a request object and returning a response. A route handler does this primarily by using model methods. Our goal is to keep these route handler functions high-level and readable.
Lastly, we have our model module. Our model module is on the inside, and should be agnostic about how it’s being used. Our route handler functions can only be used in a web server because they expect response objects and are aware of things like content-type headers. In contrast, our model methods are only concerned with getting and storing data.
This is important for testing (which we’ll cover more in depth later), because it simplifies what we need to pass to each tested method. For example, if our app had authentication, and we did not separate out the model, each time we tested the model we would have to pass everything necessary for authentication to pass. This would mean that we were implicitly testing authentication and not isolating our test cases.
As our app grows, we may choose to break our modules up further, and we may even create subfolders to store collections of related modules. For example, we might move the products.js module to a models folder once we have other model modules. For now, there’s no reason to create a subfolder to store a single file.
Now that our app has a more solid structure, we can go back to building features!
Controlling our API with Query Parameters
As of right now, our API returns the same data each time. This means that there’s no reason for our API to exist — we could achieve the same result by storing our products . json file on a static file server or CDN.
What we need to do is to have our API respond with different data depending on the client’s needs, and the first thing a client will need is a way to request only the data that it wants. At this moment, our web front-end only displays 25 products at a time. It doesn’t need our entire product catalog of over 300 items returned on each request.
Our plan is to accept two optional query parameters, limit and offset, to give the client control over the data it receives. By allowing the client to specify how many results it receives at a time and how many results to “skip”, we can allow the client to scan the catalog at its own pace.
For example, if the client wants the first 25 items, it can make a request to /products?limit=25, and if the client wants the next 25 items, it can make a request to /products?limit=25&offset=25.
If we had this feature built, here’s an example of how we could use it from the command line using curl and jq (more on these in a minute):
Grabbing the first product
Grabbing the second product
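Those commands use the -sG and -d curl options explained later in this section, and would look something like this:

# grab the first product
curl -sG http://localhost:1337/products -d limit=1 | jq

# grab the second product
curl -sG http://localhost:1337/products -d limit=1 -d offset=1 | jq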
We’ll come back to using curl and jq once we have the functionality built out and we can try these commands out on our own server.
Reading Query Parameters
We’ve already seen in Chapter 1 that express can parse query parameters for us. This means that when a client hits a route like /products?limit=25&offset=50, we should be able to access an object like:
{
  limit: 25,
  offset: 50
}
In fact, we don’t have to do anything special to access an object of query parameters. express makes that available on the request object at req.query. Here’s how we can adapt our request handler function in ./api to take advantage of those options:
async function listProducts (req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*')
  const { offset = 0, limit = 25 } = req.query
  try {
    res.json(await Products.list({
      offset: Number(offset),
      limit: Number(limit)
    }))
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}
We create two variables, offset and limit, from the req.query object that express provides. We also make sure that they have default values in case one or both are not set (e.g. if the client hits a route like /products or /products?limit=25).
It’s important to notice that we make sure to coerce the limit and offset values into numbers. Query parameter values are always strings.
This also highlights the main responsibility of our api.js module: to convert data from the express request object into the appropriate format for use with our data models and other modules.
It is our api.js module’s responsibility to understand the structure of express request and response objects, extract relevant data from them (e.g. the limit and offset options), and pass that data along in the correct format (ensuring that limit and offset are numbers) to methods that perform the actual work.
Now, we only need to make a small change to our model to accept these new options:
async function list (opts = {}) {
  const { offset = 0, limit = 25 } = opts
  const data = await fs.readFile(productsFile)
  return JSON.parse(data).slice(offset, offset + limit)
}
At this point we can start up our server with node 03/server.js (make sure to close any other server you might have running first, or we’ll see an EADDRINUSE error). Once our server is up, we can use our browser to verify that our new options are working by visiting http://localhost:1337/products?limit=1&offset=1.
Checking to see that our limit works correctly in the browser
Using curl and jq
In the beginning of this section, we referenced another way to check our API: using the command- line tools curl and jq.
curl is installed by default on many systems and is an easy way for us to interact with APIs. The simplest way to use curl is to run curl [url], where [url] is the URL of the data we want to access.
curl provides a ton of different options that change its behavior to be useful in different situations. For our current use-case, using the browser makes things pretty easy, but curl has some advantages. First, if we’re already in the terminal, we don’t have to switch to a browser. Second, by using options like -G, -d, and --data-urlencode, curl makes entering query parameters much cleaner. In the browser we would type http://localhost:1337/products?limit=25&offset=50. For curl we would do:
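Something along these lines (each query parameter on its own line):

curl -sG http://localhost:1337/products \
  -d limit=25 \
  -d offset=50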
This makes it much easier to see the different parameters and will keep things clean when we want to pass more options. In the future, we can also take advantage of the --data-urlencode option to automatically escape values for us.
NOTE: We won’t go too in depth on curl here, but we can quickly explain the -sG and -d options that we use here. The -s option will hide the progress bar so that our output is only the JSON response. If we were using curl to download a large file, the progress bar might be useful, but in this case it’s just a bunch of text that we’re not interested in. The -G option forces curl to perform a GET request. This is required because by default curl will perform a POST when using the -d (and --data-urlencode) options. Finally, we use the -d option to provide query parameters to the URL. Instead of using -d we could just append ?param=value&anotherParam=anotherValue to the URL, but it’s nicer to put each pair on a separate line and to have curl keep track of the ? and & for us.
If we were to run the previous command, we would receive 25 product entries, but they wouldn’t be formatted in a human-friendly way. However, another advantage of curl is that we can easily combine it with other command-line tools like jq.
jq is a very popular command-line tool that we can use to easily format JSON responses. We can also use it to slice, filter, map, and transform JSON; it’s an incredibly versatile tool for working with JSON APIs. For now, we’re just interested in pretty-printing for clarity. If you’re interested in playing with more features of jq, check out the jq tutorial.
Here’s the same command piped to jq for pretty-printed output:
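Assuming the same command as above, piping to jq is just a matter of appending it:

curl -sG http://localhost:1337/products \
  -d limit=25 \
  -d offset=50 | jq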
Pretty formatting and color
Using Postman
Beyond the browser and command-line, there are other popular options for testing our APIs. One of the most popular ways is to use Postman, a free, full-featured, multi-platform API development app.
Postman allows us to use a GUI to create and edit our API requests and save them in collections for later use. This can be useful once a project matures and there are many different endpoints, each having many different options.
Using Postman to nicely format our request parameters.
Web Front-end
After testing our new feature, we can modify our web front-end to allow the user to page through the app. By keeping track of limit and offset in the front-end, we can allow the user to browse the products one page at a time. We won’t cover it here, but by providing control over limit and offset, our endpoint would support “infinite scroll” behavior as well.
Our new UI supports “Previous” and “Next” buttons thanks to our new functionality
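We won’t reproduce the front-end code here either, but a rough sketch of the paging logic (the element IDs and the renderProducts() helper are illustrative, not from the book’s client) might look like this:

let offset = 0
const limit = 25

async function loadPage () {
  const url = `http://localhost:1337/products?limit=${limit}&offset=${offset}`
  const products = await (await fetch(url)).json()
  renderProducts(products) // assumed rendering helper
}

// "Next" advances one page; "Previous" goes back, never below zero
document.querySelector('#next').addEventListener('click', () => { offset += limit; loadPage() })
document.querySelector('#prev').addEventListener('click', () => { offset = Math.max(0, offset - limit); loadPage() })

loadPage()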
Product Filtering
By this point we should have a good sense of how to add functionality to our app. Let’s use the same approach to add the ability to filter results by tag.
Since we can accept a specified tag filter via query parameters, we don’t need to modify our route. This means that we don’t need to edit server.js. However, we do need to modify api.js so that our route handler looks for a tag query parameter.
Our listProducts() route handler will take the value of the tag query parameter and pass it through to the model (which we’ll modify in a bit).
Here’s the updated listProducts() handler:
async function listProducts (req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*')
  const { offset = 0, limit = 25, tag } = req.query
  try {
    res.json(await Products.list({
      offset: Number(offset),
      limit: Number(limit),
      tag
    }))
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}
All we need to do here is to make sure that we pull the tag property from the req.query object and pass it as an option to our Products.list() method. Because all query parameters come in as strings, and we expect tag to be a string, we don’t need to do any conversions here. We pass it straight through.
We’ve updated our handler, but as of right now, our Products.list() method has not been changed to accept the tag option. If we were to test this endpoint, filtering wouldn’t work yet; our model method would just ignore the extra option.
We can now change our Products.list() method to add the new functionality:
async function list (opts = {}) {
  const { offset = 0, limit = 25, tag } = opts
  const data = await fs.readFile(productsFile)
  return JSON.parse(data)
    .filter((p, i) => !tag || p.tags.indexOf(tag) >= 0)
    .slice(offset, offset + limit)
}
By adding a filter() before our slice(), we restrict the response to a single tag.
For now we’re not using a database, so we are handling the filtering logic in our app. Later, we’ll be able to easily change this method to leverage database functionality, and we won’t even have to modify our route handler.
Once our model has been updated, we can start the app (node 04/server.js) and verify that our tag filter works as expected. This time we’ll use curl and jq:
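The command would be along these lines, combining the -d option with the jq filter described below:

curl -sG http://localhost:1337/products \
  -d tag=hounds | jq '.[] | {description, tags}'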
In addition to being able to pretty-print our result, jq can also modify the output. Here we used .[] | {description, tags} as the option. This means that for every item in the input array (our API's JSON response), .[], jq should output an object with just the description and tags (| {description, tags}). It's a pretty useful shorthand when interacting with APIs so that we only get the data that we're interested in looking at. See jq's documentation for more information.
We found the only one with the tag “hounds.”
Fetching a Single Product
Our app is coming along now. We have the ability to page through all of our products and to filter results by tag. The next thing we need is the ability to get a single product.
We’re going to create a new route, handler, and model method to add this feature. Once we’re done we’ll be able to handle the following curl command:
curl -s http://localhost:1337/products/0ZPSX_mQ3xI | jq
And we’ll receive a single-product response:
Receiving a single product with curl
To get this going we’ll first add a route to our server.js file:
const express = require('express')
const api = require('./api-01')

const port = process.env.PORT || 1337

const app = express()

app.get('/products', api.listProducts)
app.get('/products/:id', api.getProduct)

const server = app.listen(port, () =>
  console.log(`Server listening on port ${port}`)
)
Notice that the route that we’re listening for is /products/:id. The :id is a named wildcard that means we want express to listen for any route that starts with /products/ and is two levels deep. This route will match the following URLs: /products/1, /products/abc, or /products/some-long-string. In addition to matching these URLs, express will make whatever is in the place of :id available to our route handler on the req.params.id property. Therefore, if /products/abc is accessed, req.params.id will be equal to 'abc'.
It’s important to note that /products/abc/123 and /products/abc/123/xyz will NOT match, because they each have more than two levels.
If we were to run this now, we’d get the error Error: Route.get() requires a callback function but got a [object Undefined]. This is expected because we haven’t created the route handler yet. We’ll do that next.
Front-to-back or back-to-front? When adding functionality, we can choose different ways of going about it. In this book we're going to approach things by going front-to-back, which means that we'll attempt to use methods and modules before they exist. An alternative would be to build back-to-front. If we were to build back-to-front, we would start with the lowest-level methods, and then move up by building out each function that calls them. In this case, if we were using the back-to-front method, we would start by adding a method to the model in products.js, then we would build the route handler in api.js, and finally, we'd add the route listener in server.js. At first glance this might make more sense: if we don't build the lowest levels first, we're going to get errors when we try to use a method or module that doesn't exist. It's important to realize, though, that getting the errors that we expect is a sign of progress and shows that we're on the right track. When we don't see the error that we expect, we'll quickly know what we did wrong. However, if we start with the lowest-level changes, we have to wait until the very end to know if everything is set up correctly, and if it's not, it will be much more difficult to know exactly where along the way we made a mistake.
We now add a getProduct() method to our api.js module:
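The new handler looks like this (we’ll see it again shortly when we look at the repetition between our two handlers), and we also add getProduct to the module.exports object alongside listProducts:

async function getProduct (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*')
  const { id } = req.params
  try {
    const product = await Products.get(id)
    if (!product) return next()
    res.json(product)
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}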
There are two new things to notice here. First, we’re accepting a new argument, next(), in our handler function, which we use in the case that our product is not found, and second, we’re using req.params to get the desired product ID from the request.
We’ll go into more detail on next() in a bit, but for right now the key thing to know is that all express handler functions are provided req (the request object), res (the response object), and next(), a callback that signifies that the “next” available handler should be run.
We haven’t defined any middleware yet (we’ll also talk about this in a bit), so at this point we haven’t specified any other handlers that can run. This means that next() will trigger the default “not found” handler built into express.
To see this default handler in action, use curl to hit a route that we’re not listening for:
$ curl -s http://localhost:1337/path-that-doesnt-exist
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /path-that-doesnt-exist</pre>
</body>
</html>
This HTML response is generated by express. Soon, we’ll define our own middleware handler to run instead so that we return JSON instead of HTML. We’ll also create some other handlers that will help us clean up our code by reducing repetition.
For now, we can test our new handler with the expectation that we’ll get an error because Products.get() is not yet defined:
$ curl -s http://localhost:1337/products/0ZPSX_mQ3xI
{"error":"Products.get is not a function"}
The last step is to add Products.get() to our products.js module:
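A minimal sketch of what get() can look like, assuming we reuse the productsFile path and the promise-based fs already required at the top of this module (and add get to module.exports alongside list):

async function get (id) {
  const products = JSON.parse(await fs.readFile(productsFile))

  // iterate through the products until we find the one with the matching ID
  for (let i = 0; i < products.length; i++) {
    if (products[i].id === id) return products[i]
  }

  // if we don't find one, return null
  return null
}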
Nothing fancy here. We iterate through the products until we find the one with the matching ID. If we don’t find one, we return null.
At this point we can use curl again to fetch a product, but this time we won’t get an error:
$ curl -s http://localhost:1337/products/0ZPSX_mQ3xI | jq
{
  "id": "0ZPSX_mQ3xI",
  "description": "Abstract background",
  ...
}
Next() and Middleware
We’ve successfully added functionality so that we can retrieve a single product, but there’s some cleaning up we can do. If we look at our api.js file, we see that we have some repetition:
async function getProduct (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*')
  const { id } = req.params
  try {
    const product = await Products.get(id)
    if (!product) return next()
    res.json(product)
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}

async function listProducts (req, res) {
  res.setHeader('Access-Control-Allow-Origin', '*')
  const { offset = 0, limit = 25, tag } = req.query
  try {
    res.json(await Products.list({
      offset: Number(offset),
      limit: Number(limit),
      tag
    }))
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}
Both getProduct() and listProducts() set the Access-Control-Allow-Origin header for CORS and use try/catch to send JSON-formatted errors. By using middleware we can put this logic in a single place.
In our server.js module, we’ve used app.get() to set up a route handler for incoming GET requests. app.get(), app.post(), app.put(), and app.delete() will set up route handlers that will only execute when requests come in with a matching HTTP method and a matching URL pattern. For example, if we have the following handler set up:
app.post('/upload', handleUpload)
handleUpload() will only run if an HTTP POST request is made to the /upload path of the server.
In addition to route handlers, we can set up middleware functions that will run regardless of the HTTP method or URL. This is very useful in the case of setting that CORS header. We can create a middleware function that will run for all requests, saving us the trouble of adding that logic to each of our routes individually.
To add a middleware function to our server.js module, we’ll use app.use() like this:
app.use(middleware.cors)

app.get('/products', api.listProducts)
app.get('/products/:id', api.getProduct)
We’ll also create a new module, middleware.js, to store our middleware functions (we’ll add some more soon).
Here’s what our middleware.cors() function looks like:
function cors (req, res, next) {
  const origin = req.headers.origin
  res.setHeader('Access-Control-Allow-Origin', origin || '*')
  res.setHeader(
    'Access-Control-Allow-Methods',
    'POST, GET, PUT, DELETE, OPTIONS, XMODIFY'
  )
  res.setHeader('Access-Control-Allow-Credentials', 'true')
  res.setHeader('Access-Control-Max-Age', '86400')
  res.setHeader(
    'Access-Control-Allow-Headers',
    'X-Requested-With, X-HTTP-Method-Override, Content-Type, Accept'
  )
  next()
}
We’ve taken the same res.setHeader() call from our two route handlers and placed it here. The only other thing we do is call next().
next() is a function that is provided to all request handlers; this includes both route handlers and middleware functions. The purpose of next() is to allow our functions to pass control of the request and response objects to other handlers in a series.
The series of handlers is defined by the order in which we call functions on app. Looking above, we call app.use(middleware.cors) before we call app.get() for each of our route handlers. This means that our middleware function, middleware.cors(), will run before either of those route handlers has a chance to. In fact, if we never called next(), neither of those route handlers would ever run.
Currently in our server.js module, we call app.use(middleware.cors) first, so it will always run. Because we always call next() within that function, the following route handlers will always have a chance to run. If a request is made with a matching HTTP method and URL, they will run.
Another way of thinking about middleware is that they are just like route handlers, except that they match all HTTP methods and URLs. Route handlers are also passed the next() function as their third argument. We’ll talk about what happens if a route handler calls next() in a minute.
For now, let’s verify that our middleware is working as expected by removing our res.setHeader() calls from our api.js module:
const Products = require('./products')

module.exports = {
  getProduct,
  listProducts
}

async function getProduct (req, res, next) {
  const { id } = req.params
  try {
    const product = await Products.get(id)
    if (!product) return next()
    res.json(product)
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}

async function listProducts (req, res) {
  const { offset = 0, limit = 25, tag } = req.query
  try {
    res.json(await Products.list({
      offset: Number(offset),
      limit: Number(limit),
      tag
    }))
  } catch (err) {
    res.status(500).json({ error: err.message })
  }
}
If we run node 05/server-02.js and use curl to check the headers, we should see the following:
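One way to inspect the headers (the flag choice here is ours; any approach that shows response headers works) is to have curl dump them:

# -D - prints the response headers to stdout, -o /dev/null discards the body
curl -s -D - -o /dev/null http://localhost:1337/products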
Notice that the CORS header, Access-Control-Allow-Origin, is still set even though we removed it from our api.js module. Our middleware.js module now handles it for all requests.
Error Handling With Next()
Now that we’ve centralized setting the CORS header, we can do the same with error handling. In both of our api.js module’s route handlers, we use a try/catch along with res.status(500).json({ error: err.message }). Instead of doing this in each of our route handlers, we can delegate this behavior to a single middleware function.
There’s actually a different type of middleware function for handling errors. We can create an error handler middleware function by defining it with an extra first argument to receive the error. Once we do that, any time that next() is called with an argument (e.g. next(err)), the next available error handler middleware function will be called.
This means that if we have an internal error that we don’t expect (e.g. we can’t read our products file), we can pass that error to next() instead of having to deal with it in each route handler. Here’s what our error handling middleware function looks like:
function handleError (err, req, res, next) {
  console.error(err)
  if (res.headersSent) return next(err)

  res.status(500).json({ error: 'Internal Error' })
}
As we can see, the first argument is err, and this function will receive the error object that is passed to next() along with the other standard arguments, req and res. We’ve also made the decision to log errors using console.error() and not to provide that information in the response to the client. Error logs often contain sensitive information, and it can be a security risk to expose them to API clients.
There are possible problematic interactions between the default express error handler and custom error handlers. We've skipped the details here, but best practice is to only respond if res.headersSent is false within a custom error handler. See Error Handling in the official express documentation for more information.
In addition to catching internal errors, we’ll also use middleware to deal with client errors such as 404 Not Found. express comes with a default “Not Found” handler. If no routes match, it will be called automatically. For our app, this isn’t desirable — we want all of our responses to be JSON, and the default handler will respond with HTML. To test out the default handler, make a request to our server with a URL that we’re not listening for:
$ curl http://localhost:1337/not-products
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Error</title>
</head>
<body>
<pre>Cannot GET /not-products</pre>
</body>
</html>
To fix this, we’ll add our own “catch all” middleware to handle this case:
function notFound (req, res) {
  res.status(404).json({ error: 'Not Found' })
}
This is not an error handler, so we do not look for an error object as the first argument. Instead, this is a normal middleware function that we’ll set up to run after all routes have been checked. If no route handler matches the request’s URL, this middleware function will run. Additionally, this function will also run if any route handler calls next() with no argument. We do this by calling app.use() after we set up our route handlers:
app.use(middleware.cors)

app.get('/products', api.listProducts)
app.get('/products/:id', api.getProduct)

app.use(middleware.handleError)
app.use(middleware.notFound)
By looking at the order, we can easily see the logic for how express will handle any request that comes in. First, all requests will run through middleware.cors(). Next, the request URL will be matched against our two route handlers. If no route handler matches the request URL, middleware.notFound() will run; if one does match, the corresponding route handler will run. While that route handler is running, if there’s an error and next(err) is called, middleware.handleError() will run. Alternatively, if a route handler calls next() with no argument, middleware.notFound() will run.
Eliminating Try/Catch
There’s one more improvement that we can make to our api.js module. Currently, each of our route handlers has its own try/catch, and while it is an improvement that they can leverage our middleware.handleError() function to log the error and send the appropriate JSON response to the client, it would be nicer if that behavior were automatic. To do this we’ll create a helper function that will wrap our route handlers so that any error is automatically caught and passed to next().
We won't cover this helper function in detail here, but the code is available in 05/lib/auto-catch.js.
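As a rough idea of how such a wrapper can work (the actual code in 05/lib/auto-catch.js may differ), it takes an object of handler functions and returns the same object with each handler wrapped so that a rejected promise is passed to next():

// lib/auto-catch.js (sketch)
module.exports = function autoCatch (handlers) {
  return Object.keys(handlers).reduce((wrapped, key) => {
    wrapped[key] = (req, res, next) =>
      Promise.resolve(handlers[key](req, res, next)).catch(next)
    return wrapped
  }, {})
}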
Using this new autoCatch() function, we can simplify our api.js module:
const Products = require('./products')
const autoCatch = require('./lib/auto-catch')

module.exports = autoCatch({
  getProduct,
  listProducts
})

async function getProduct (req, res, next) {
  const { id } = req.params
  const product = await Products.get(id)
  if (!product) return next()
  res.json(product)
}

async function listProducts (req, res) {
  const { offset = 0, limit = 25, tag } = req.query
  const products = await Products.list({
    offset: Number(offset),
    limit: Number(limit),
    tag
  })
  res.json(products)
}
Now each handler function can be much simpler because it doesn’t need to have its own try/catch. We can now test our API to make sure that our error handling middleware is behaving as expected:
$ curl http://localhost:1337/not-products
{"error":"Not Found"}

$ mv products.json products.json.hidden \
  && curl http://localhost:1337/products \
  && mv products.json.hidden products.json
{"error":"Internal Error"}
At this point we have a good foundation to continue to add more routes and functionality to our API, and we’ll be able to write simpler route handlers because of the logic that we’ve centralized as middleware.
Next up, we’ll start building routes that will make our API even more interactive by responding to HTTP POST, PUT, and DELETE methods.
HTTP POST, PUT, and DELETE
So far we’ve only been responding to HTTP GET requests. GET requests are simple and the most common, but they limit the information that a client can send to an API.
In this section we’ll quickly show how we can create route handlers for the client to create, update, and delete products using HTTP POST, PUT, and DELETE requests. For now, we’re only going to concentrate on receiving and parsing data from the client. Then, in the next chapter we’ll cover how to persist that data.
When a client sends a GET request, the only information transferred is the URL path, host, and request headers. Furthermore, GET requests are not supposed to have any permanent effect on the server. To provide richer functionality to the client we need to support other methods like POST.
In addition to the information that a GET request sends, a POST can contain a request body. The request body can be any type of data such as a JSON document, form fields, or a movie file. An HTTP POST is the primary way for a client to create documents on a server.
We’ll want our admin users to be able to use the client to create new products. For now, all users will have this power, and we’ll cover authentication and authorization for admin users in a later chapter. To create a new product, our user will use their client to send an HTTP POST to the endpoint /products with the new product as the JSON body.
To listen for this HTTP POST, we add a new line to our server.js module using app.post():
app.post('/products', api.createProduct)
We haven’t created api.createProduct() yet, so if we run our server, we’ll see an error. Let’s create that method now:
async function createProduct (req, res, next) {
  console.log('request body:', req.body)
  res.json(req.body)
}
What we’re ultimately interested in is storing a new product for use later. However, before we can store a product, we should be able to access it. Since we’re expecting the client to send the product as a JSON document in the request body, we should be able to see it via console.log() and send it back via res.json().
Run the server with node 06/server-01.js, and let’s test it out using curl:
$ curl -X POST \
  -d '{"description":"test product", "link":"https://example.com"}' \
  -H "Content-Type: application/json" \
  http://localhost:1337/products
Here's a quick explanation of the options that we just passed to curl: -X POST sets the request method to POST, -d sets the body of the request, and -H sets a custom request header.
Hmm… We were expecting to see our object sent back to us, but we didn’t get any output. Also, if we check our server log, we can see that req.body is undefined. What gives?
express doesn’t parse request bodies for us automatically. If we wanted to, we could read the data from the request stream manually and parse it after the transfer finishes. However, it’s much simpler to use the middleware recommended by express.
Let’s add that to our server.js module:
const express = require('express')
const bodyParser = require('body-parser')
const api = require('./api')
const middleware = require('./middleware')

const port = process.env.PORT || 1337

const app = express()

app.use(middleware.cors)
app.use(bodyParser.json())

app.get('/products', api.listProducts)
app.post('/products', api.createProduct)
And now we can try curl again:
$ curl -X POST \
  -d '{"description":"test product", "link":"https://example.com"}' \
  -H "Content-Type: application/json" \
  http://localhost:1337/products
{"description":"test product","link":"https://example.com"}
There we go. We’re now seeing the response we expect.
With that working, we can add route handlers to edit and delete products:
app.put('/products/:id', api.editProduct)
app.delete('/products/:id', api.deleteProduct)
And we can stub out their handlers:
async function editProduct (req, res, next) {
  // console.log(req.body)
  res.json(req.body)
}

async function deleteProduct (req, res, next) {
  res.json({ success: true })
}
Wrap Up
In this chapter we’ve started our API, and we built it to the point where it can serve up product listings for various clients (web front-end, curl, Postman, etc.). Our API supports paging and filtering options, and for now, we’re serving the data from a JSON file on disk. We’ve implemented custom middleware to handle CORS headers and custom error handling, and we can parse JSON request bodies.
At this point, our server is read-only. We have added routes for creating, editing, and deleting products, but as of right now they are only stubs. In the next chapter we’re going to make them functional, and to do that we’re going to cover persistence and databases.