The Goal

In this post I'll explain the methodology I use to keep my documentation up to date and accurate. I'll demonstrate the use of Postman and the Newman CLI hooked into an npm script to validate that our API endpoints are operating as they should.

The Problem

I've used many libraries as a professional developer over the years, and the make-or-break factor when selecting one over another really boils down to how thorough its documentation is. Not being able to figure out an API, or needing to guess how data is returned, is painful to say the least. I would get frustrated achieving 80% of my desired functionality with a certain library, only to find myself tucking tail to look for an alternative after exhausting the docs created by the library's authors.

In my own work, the projects I've taken on have grown larger and more complex over time, and I began to feel overwhelmed maintaining my own documentation! I've worked with many tools for testing and documentation over the years (Swagger, Apiary, Stoplight, Postman, numerous doc-generator scripts, Mocha, Chai, Jest, Enzyme, Detox, XCUITest, to name a few) and will share my current process, as I think it has made me work better both individually and as a member of a software development team.

Starting from TDD

Test-Driven Development is a widely known and used development methodology born from the desire to prevent software regression from rearing its ugly head. It is a great practice that helps developers keep their tests up to date. When tests are consistently maintained, we can be a little more confident in our ability to deliver rock-solid code into production. In my (not so humble) opinion, every developer and piece of software would benefit from adopting some form of TDD into the development workflow.

The biggest complaint about TDD is maintaining the tests. Writing tests can be a pretty big ego killer for a developer ("That's a job for QA, man... I'm an app dev!"), but we don't realize that as developers we write tests all the time without even thinking about it. I have used Postman for many years as a GUI for cURL. The cycle would be routine:

  1. Write REST API endpoint in backend MVC framework of choice
  2. Construct appropriate request method (GET, POST, PUT, DELETE...) in the Postman GUI
  3. Hit send
  4. Watch Results
  5. Check for Errors
  6. Repeat

Over the past couple of months I have been opting to use apimatic.io's API description transformer to convert a Postman collection into OpenAPI format, which is compatible with either hosted or self-hosted Swagger docs.

The hosted docs for the demo application I am using can be found here.

I used this excellent article on Building a RESTful API with Koa and Postgres as a starting point.

Writing Tests

Michael Herman does an excellent job of explaining very clearly how to integrate tests, seeding, and migrations with the use of the following libraries:

  1. Koa
  2. Chai
  3. Knex

Knex operates in a fashion almost identical to Laravel's php artisan utility, enabling us to run migrations and database seeders rather quickly. You can refer to Herman's article directly for explanations of how his seeders and migrations are structured. Seeding data allows us to control this aspect of our test environment.

The Chai tests operate on the HTTP endpoints and read almost like natural language, using the should assertion style.

The syntax is fairly straightforward and easy to learn. The file Herman provides also includes a nice setup and teardown, which run our migrations and seeders around each test:


beforeEach(() => {
  return knex.migrate.rollback()
    .then(() => { return knex.migrate.latest(); })
    .then(() => { return knex.seed.run(); });
});

afterEach(() => {
  return knex.migrate.rollback();
});

Running npm test will give us our test results as output:

These are nice, but it is still a bit removed from my mental model of *code, execute, repeat*.

Recently I noticed the Tests tab in the Postman app.

Some quick googling led me to their documentation, of course.

The tests are "sandboxed JavaScript code" snippets that can hold user state through environment and global variables. As an example, here is my attempt at converting Herman's PUT movie test:

  describe('PUT /api/v1/movies', () => {
    it('should return the movie that was updated', (done) => {
      knex('movies')
        .select('*')
        .then((movie) => {
          const movieObject = movie[0];
          chai.request(server)
            .put(`/api/v1/movies/${movieObject.id}`)
            .send({
              rating: 9
            })
            .end((err, res) => {
              // there should be no errors
              should.not.exist(err);
              // there should be a 200 status code
              res.status.should.equal(200);
              // the response should be JSON
              res.type.should.equal('application/json');
              // the JSON response body should have a
              // key-value pair of {"status": "success"}
              res.body.status.should.eql('success');
              // the JSON response body should have a
              // key-value pair of {"data": 1 movie object}
              res.body.data[0].should.include.keys(
                'id', 'name', 'genre', 'rating', 'explicit'
              );
              // ensure the movie was in fact updated
              const newMovieObject = res.body.data[0];
              newMovieObject.rating.should.not.eql(movieObject.rating);
              done();
            });
        });
    });
  });

My Postman equivalent starts from the POST movie test, where I set an environment variable:

pm.test("returns data POSTed", function () {
    const jsonData = pm.response.json();
    const movieId = jsonData.data[0].id;
    pm.environment.set("POST_MOVIE_ID", movieId);
    console.log(pm.environment.get("POST_MOVIE_ID"));
});

I retrieve POST_MOVIE_ID in the test for PUT by referencing it in the request URL:

http://localhost:1337/api/v1/movies/{{POST_MOVIE_ID}}

A more complete comparison:

The Chai test for the GET all movies endpoint:

  describe('GET /api/v1/movies', () => {
    it('should return all movies', (done) => {
      chai.request(server)
      .get('/api/v1/movies')
      .end((err, res) => {
        // there should be no errors
        should.not.exist(err);
        // there should be a 200 status code
        res.status.should.equal(200);
        // the response should be JSON
        res.type.should.equal('application/json');
        // the JSON response body should have a
        // key-value pair of {"status": "success"}
        res.body.status.should.eql('success');
        // the JSON response body should have a
        // key-value pair of {"data": [3 movie objects]}
        res.body.data.length.should.eql(3);
        // the first object in the data array should
        // have the right keys
        res.body.data[0].should.include.keys(
          'id', 'name', 'genre', 'rating', 'explicit'
        );
        done();
      });
    });
  });

Postman equivalent:

// there should be a 200 status code
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
// the response should be JSON
pm.test("response is valid JSON", function () {
    pm.response.json(); // throws (and fails the test) if the body is not JSON
});
// the JSON response body should have a
// key-value pair of {"data": [3 movie objects]}
pm.test("three movies", function () {
    const jsonData = pm.response.json();
    pm.expect(jsonData.data.length).to.eql(3);
});

Using the Newman CLI tool, we can use our exported Postman collection and environment file to run our tests programmatically.

I updated my package.json to include a newman-local script:

"newman-local": "newman run postman/koa-movies-api.postman_collection.json -e postman/koa-movies.postman_environment.json -r cli,html --reporter-html-export tests/newman/report.html --ignore-redirects"`

Now I can run tests fairly easily from the terminal without writing too many additional JavaScript tests:


Newman CLI test output

Thoughts & Next Steps

Postman testing doesn't (yet) seem robust or full enough to be a complete replacement for Chai; I am still working out how to implement branching logic and nested requests, and hope to add examples in the follow-up to this article.

I believe that making the shift from Test Driven Development to Documentation Driven Development can really emphasize code maintainability. If code is well documented and well maintained we can all benefit as both users and developers.

In the follow-up to this tutorial, I will hook up continuous integration to run our Newman tests whenever a new commit is pushed to Git.