Comparing changes

base repository: nodejs/github-bot
base: main
head repository: nodejs/github-bot
compare: resilient-tests-refactor2
  • 6 commits
  • 16 files changed
  • 1 contributor

Commits on Apr 10, 2020

  1. Make flawed integration tests explode by fixing nock expectations

    Up until now, a handful of integration tests have been flawed because
    we didn't actually verify that all nock.js expectations (read: nock.js
    scopes) had been fulfilled. That was due to a desire to run several
    methods as a one-liner instead of one per line:
    
    ```
    filesScope.done() && existingRepoLabelsScope.done()
    ```
    
    That could make sense at first glance, but there's a very important
    catch we didn't think about when writing it: the `&&` requires
    `.done()` to return a truthy value, or else whatever comes after it
    will not be executed.
    
    The `.done()` method on nock.js' scope either throws an exception if
    the expectation it represents has not been met, or returns
    `undefined` when all is good.
    
    That `undefined` value stops the subsequent methods on the same line
    from being executed. In other words, we have only ever checked the
    first expectation, and none of the subsequent ones on the same line.
    
    The changes introduced in this commit execute these `.done()`s on
    their own lines, which sadly causes all of the related tests to
    explode at the moment.
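
    A runnable sketch of the difference (the endpoints below are
    hypothetical, only the `.done()` pattern is the point):
    
    ```
    const https = require('https')
    const nock = require('nock')
    
    const filesScope = nock('https://api.github.com').get('/files').reply(200)
    const labelsScope = nock('https://api.github.com').get('/labels').reply(200)
    
    // Only the first request is performed, so only filesScope is fulfilled.
    https.get('https://api.github.com/files', () => {
      // Old pattern: filesScope.done() returns undefined, the && short-circuits,
      // and labelsScope.done() never runs -- the unmet second expectation
      // goes unnoticed.
      filesScope.done() && labelsScope.done()
    
      // Fixed pattern: each call runs unconditionally; labelsScope.done()
      // now throws because its request was never made.
      filesScope.done()
      labelsScope.done()
    })
    ```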
    
    Why most of these expectations haven't been met is probably due to
    timing issues, since we don't wait for all the related code to have
    finished executing before the tests' assertions are run. As of now,
    the code has been written in a fashion that allows incoming HTTPS
    requests to get their response ASAP, whilst the code pushing PR
    statuses to github.com or triggering Jenkins builds is still running.
    That raises some challenges for our integration tests, since they
    don't really know when the code has finished, and thereby when it's
    safe to run assertions.
    
    Upcoming changes will fix that by ensuring incoming requests get
    their response *after* all relevant code has succeeded or failed.
    That will introduce quite a big refactor job, but the end result will
    be far more robust tests that can be trusted.
    phillipj committed Apr 10, 2020 · 249b062

Commits on Apr 14, 2020

  1. Remove majority of fake timers use while testing

    As a precursor to running test assertions after we *know* all work
    has been done, most of our use of fake timers gets removed here. It
    will not be needed anymore, and it even causes challenges that are not
    easily solved: requests start hanging because work has not finished
    yet, and the culprit is timers not running as they normally do.
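
    A sketch of the failure mode, assuming sinon-style fake timers (the
    deferred work below is hypothetical):
    
    ```
    const sinon = require('sinon')
    
    const clock = sinon.useFakeTimers()
    
    // Hypothetical bot work deferred via a timer, e.g. a retry or backoff.
    function doDeferredWork (cb) {
      setTimeout(cb, 1000)
    }
    
    doDeferredWork(() => console.log('work done'))
    
    // With the clock frozen the timeout never fires on its own, so any
    // request waiting for the work to finish just hangs, unless the test
    // knows exactly how far to advance the clock:
    clock.tick(1000)
    
    clock.restore()
    ```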
    phillipj committed Apr 14, 2020 · e34a7c8
  2. Reinitialise app before each label integration test to avoid label cache

    Several existing integration tests were failing because there were
    nock.js expectations created that wanted to see existing repository
    labels being fetched from github.com.
    
    Since we cache repository labels for an hour, said retrieval of
    labels only happened in the first of the label integration tests, and
    not in the subsequent ones, since the labels had been put in the cache
    by then.
    
    We don't want to depend on the order in which tests run, but rather
    start from a clean slate every time. These changes ensure we
    reinitialise the `app` in every related test, resulting in a clean
    labels cache every time.
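
    A sketch of the reinitialisation (the helper is hypothetical and a
    tap-style test is assumed; the point is busting Node's require cache
    so module-level caches start empty):
    
    ```
    const { test } = require('tap')
    
    // Hypothetical helper: re-require ./app so module-level state,
    // including the repository labels cache, starts from scratch.
    function freshApp () {
      delete require.cache[require.resolve('./app')]
      return require('./app')
    }
    
    test('adds labels to a newly opened PR', (t) => {
      const app = freshApp()
      // ...nock expectation that existing labels are fetched from
      // github.com, which now holds in every test, not just the first...
      t.end()
    })
    ```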
    phillipj committed Apr 14, 2020 · d7adb14

Commits on May 24, 2020

  1. Respond to GitHub events webhook HTTPS requests when all work is done

    With the goal of making our integration tests more robust and actually
    reliable, these changes make sure incoming requests related to GitHub
    events get responded to after all work is done.
    
    This makes a big difference for our test suite as we now know when it
    is safe to run assertions.
    
    Up until now we've responded to said requests immediately and kicked
    off work, like setting PR labels and similar, asynchronously. Although
    that could make sense at runtime, it makes life hard when wanting to
    verify the behaviour while testing, since we don't know when all the
    work has completed.
    
    Conceptually these changes consist of:
    
    - Installing and using [events-async](https://www.npmjs.com/package/events-async)
      to know when all event listeners have completed
    - Promisifying all the event listeners, since the above npm package
      uses promises instead of callbacks to handle async tasks
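
    A sketch of the resulting flow (handler names are hypothetical; the
    assumption, per the package's docs, is that `emit()` returns a promise
    that settles once every listener has settled):
    
    ```
    const Events = require('events-async')
    
    const events = new Events()
    
    // Hypothetical piece of bot work.
    async function setPrLabels (event) { /* push labels to github.com */ }
    
    // Listeners return promises so their completion can be awaited.
    events.on('pull_request', (event) => setPrLabels(event))
    
    // The webhook handler responds only after every listener has settled,
    // so tests know exactly when it's safe to run assertions.
    async function handleGithubWebhook (req, res, payload) {
      await events.emit('pull_request', payload)
      res.statusCode = 200
      res.end()
    }
    ```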
    phillipj committed May 24, 2020 · 1639a84
  2. Move require() of all scripts from ./app.js to ./server.js

    These changes come as a result of making tests more resilient.
    
    When running focused integration tests, we have some expectations of
    what kinds of requests are sent to github.com and what the end result
    should be after the bot has run all its magic.
    
    Those expectations are written with only one specific script in mind,
    e.g. what should happen when pushing Jenkins updates or what should
    happen when labelling PRs.
    
    Up until now, all of our scripts have been set up by `require()`ing
    them inside `./app.js`. That means all of them have been set up, and
    have thereby reacted to the incoming github.com webhook requests we've
    sent to the github-bot while testing. That's far from ideal, because
    we cannot have "simple" and focused expectations about which
    github.com requests are expected and what they should get in response.
    
    Hence these changes: whilst running tests, we only `require()` the
    respective script the tests relate to.
    
    To keep the automagic of setting up all scripts when the github-bot
    is started in production, though, we move the lines of code that
    iterate over the available scripts into `./server.js` instead, which
    is *only* executed in production, not when testing.
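
    A sketch of the kind of loop that moves into `./server.js` (the
    directory layout and export shape are assumptions):
    
    ```
    // ./server.js -- only executed in production, so tests are free to
    // require() just the one script they exercise.
    const fs = require('fs')
    const path = require('path')
    const app = require('./app')
    
    const scriptsDir = path.join(__dirname, 'scripts')
    for (const file of fs.readdirSync(scriptsDir)) {
      if (!file.endsWith('.js')) continue
      // Assumed export shape: each script exports a function taking the app.
      require(path.join(scriptsDir, file))(app)
    }
    ```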
    phillipj committed May 24, 2020 · 47c3ecd
  3. Add missing timers and v6.x label in integration labels fixture

    Several existing integration tests expected these labels to exist in
    the first page of labels results from github.com, but that wasn't
    raised as an error until now that the tests have been refactored.
    phillipj committed May 24, 2020 · 0652131