Netlify - Add Google Analytics to Hexo on Production-Deploy but Not on Branch-Deploy

How would you know if your website is seen or used? Google Analytics.

For a while I’ve been using Google Analytics to track my blog.
It’s free and it gives me insight into which blog posts others find interesting too.
But…I’ve been doing it wrong.

Tools and Workflow

My blog framework is Hexo and I have Continuous Deployment working with the help of Netlify.
When I commit code to the master branch, my blog automatically deploys for your viewing pleasure.
For feature branches, each of which represents a new post, I configured Netlify’s Branch-Deploy to make the draft post available via a special URL.
I like the Branch-Deploy feature, as I can quickly share it with a friend to get feedback, but more importantly, I can compare how the webpage renders on different devices.

My Netlify Build & Deploy => Continuous Deployment => Deploy context settings:

Deploy all branches pushed to Netlify

Adding Google Analytics (GA) to Hexo

Assuming you have a Universal Analytics tracking ID (UA number), adding GA to a Hexo blog is very simple.
(If you don’t have a UA number, see the resources below to create one.)

Just add your Universal Analytics (UA) number to your theme’s ../themes/[YourThemeName]/_config.yml file,
and also add a new ../themes/[YourThemeName]/layout/_partial/google_analytics.ejs file to Hexo.

Add Google Analytics to Hexo
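For example, the relevant config line might look like this (a sketch using the UA number that appears later in this post; yours will differ):

# ../themes/[YourThemeName]/_config.yml
google_analytics: UA-7445566-1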

Here is the content of the google_analytics.ejs:

<% if (theme.google_analytics){ %>
<!-- Google Analytics -->
<script type="text/javascript">
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', '<%= theme.google_analytics %>', 'auto');
ga('send', 'pageview');

</script>
<!-- End Google Analytics -->
<% } %>

This works well. Below is a sample of the Google Analytics code Hexo generates in the head section of each page.

<head>
...
...
<!-- Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-7445566-1"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag() {
  dataLayer.push(arguments);
}
gtag('js', new Date());
gtag('config', 'UA-7445566-1');
</script>
<!-- Google Analytics End-->
</head>

My Mistake

After I added GA to my blog, I browsed the Branch-Deploy URL (the test site) and realised GA recorded my visit as a real-time active user.

I don’t want any stats recorded when I’m in Test/Draft, I’m only interested in Production stats…oops :)

My mission

How to get Hexo to generate Google Analytics (GA) on Production deployments only.

A quick hexo generate revealed that when I remove the UA number from the _config.yml, Hexo removes all GA code from the pages too.

# Miscellaneous 
google_analytics:

I figured that if I could use an Environment Variable to represent the Google Analytics UA tracking number, I would be able to add the UA number to the _config.yml for Production deployments and remove it on Branch deployments.

My Solution

After a few minutes in the Netlify documentation, I discovered the awesome Linux stream editor command sed.
This command searches and replaces file content.
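A minimal local sketch of the replacement (assuming the placeholder from the next step is already in the file):

# set the variable, as Netlify does per deploy context
export GA_UA_PLACEHOLDER="UA-7445566-1"
# replace every occurrence of the placeholder, editing the file in place
sed -i "s/GA_UA_PLACEHOLDER/${GA_UA_PLACEHOLDER}/g" ./themes/landscape/_config.yml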

Now I have all the info to solve my issue.

Add a GA_UA_PLACEHOLDER placeholder in the _config.yml, which will be replaced on deployment.

# Miscellaneous 
google_analytics: GA_UA_PLACEHOLDER

Then, add the Environment Variables and the sed (string replace) step to the netlify.toml.

[build]
base = "blog"
publish = "blog/public"

[context.production]
environment = { GA_UA_PLACEHOLDER = "UA-7445566-1" }
command = "printenv && sed -i s/GA_UA_PLACEHOLDER/${GA_UA_PLACEHOLDER}/g ./themes/landscape/_config.yml && hexo generate && cp ../prod_headers.txt public/_headers --verbose"

[context.branch-deploy]
environment = { GA_UA_PLACEHOLDER = "" }
command = "printenv && sed -i s/GA_UA_PLACEHOLDER/${GA_UA_PLACEHOLDER}/g ./themes/landscape/_config.yml && hexo generate && cp ../branch_headers.txt public/_headers --verbose"

[context.production]: All steps under this section execute when a commit is detected on the master branch.

environment = { GA_UA_PLACEHOLDER = "UA-7445566-1" }: Sets the Environment Variable GA_UA_PLACEHOLDER to UA-7445566-1 when changes are detected on master.

[context.branch-deploy]: All steps under this section execute when a commit is detected on any feature branch.

environment = { GA_UA_PLACEHOLDER = "" }: Sets the Environment Variable GA_UA_PLACEHOLDER to an empty string when changes are detected on a branch deployment.

printenv: Lists all the Environment Variables, just a debug thing I do :)

sed -i s/GA_UA_PLACEHOLDER/${GA_UA_PLACEHOLDER}/g ./themes/landscape/_config.yml: Takes the _config.yml file as input, searches for all occurrences of GA_UA_PLACEHOLDER, and replaces them with the value of the Environment Variable ${GA_UA_PLACEHOLDER}.

cp ../prod_headers.txt public/_headers (and its branch_headers.txt counterpart): Copies the context-specific headers file to the public folder under the filename _headers, so that Netlify can apply the headers.
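(For context, a Netlify _headers file simply lists URL path rules followed by indented headers; a minimal sketch, not my actual file:)

/*
  X-Frame-Options: DENY
  X-XSS-Protection: 1; mode=block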

After these changes, the head section on each page for feature branches was GA-free.
For master commits, the GA code “magically” appeared.
Finally, my stats reflect reality.

Use it…don’t use it :)

PS: Please feel free to leave a comment on how to improve this approach.

Google Analytics Resources

How to add a new website in Google Analytics
Google - Get started with Analytics

Approval Tests - Another Tool for My Software Test Belt

(Tech: C#)

Problem

I was working on a project where we refactored a particular class.
The class had a .Get() function, returning an Object.
Our mission was to implement a new class with the same .Get(), but the returning Object’s data would be retrieved differently.
This is a classic refactor, and thank goodness the team before us had sufficient unit tests :)

The unit tests were very easy to follow and with a few tweaks we had it implemented in no time.

Our next step was to implement integration tests.
This would definitely boost our confidence in our changes by comparing the old implementation’s results to the new implementation’s.

But how can I compare the objects without a lot of work from our team?

Plan of Attack

My plan was to serialize the objects to JSON strings, and then compare the string results - see the example below.
I would depend on the very popular Newtonsoft.Json to make it happen.

// Don't think too much when you see the example,
// it's aimed to be very simple to get the point across :)

using Newtonsoft.Json;
using Shouldly;
using Xunit;

// An Xunit (pick your poison: MSTest, NUnit) integration test might look something like this:
public class AddressRepositoryTests
{
    [Fact]
    public void CompareOldAddressRepoWithNewAddressRepoTest()
    {
        // arrange
        var oldAddressRepository = new OldAddressRepository();
        var newAddressRepository = new NewAddressRepository();

        // act
        var oldAddresses = oldAddressRepository.Get();
        var newAddresses = newAddressRepository.Get();

        // serialise
        string oldAddressesJSON = JsonConvert.SerializeObject(oldAddresses, Formatting.Indented);
        string newAddressesJSON = JsonConvert.SerializeObject(newAddresses, Formatting.Indented);

        // assert
        oldAddressesJSON.ShouldBe(newAddressesJSON);
    }
}

This approach achieved the objective, and if I had automated integration tests running as part of my build pipeline, this would have been good enough.

In our case, we did not run the integration tests on every commit as we only needed a temporary test solution to prove our changes were working. At a certain point, when we were confident our code works, the plan was to remove the old implementation completely.

We also used Scientist.net to test in production…wait…WHAT!!!
Before you jump up and down, we are not testing in production :)
With Scientist.net, the old code still executes, while the new code runs as an experiment in production.
But that’s for another day :)

Back to my test scenario - because we ran our integration test on demand, I found it difficult to track the differences in the output generated by the assertion tool, in my case Shouldly.

This was not a shortcoming of Shouldly; rather, the Object returned was complex and the JSON string value was just too big.

My solution was to copy the oldAddressesJSON result into a file, repeat the same for newAddressesJSON, and then compare the two files with a diff tool like P4Merge.
(Choose your diff tool for Windows or Mac - kudos to Tobias Günther.)

I found it easier to compare the results side-by-side in a diff tool, as opposed to a top-to-bottom text blob in the generated output string. I preferred good old-fashioned eyeballing of the differences :)

Very soon I got bored with the copy-text-to-file routine and started thinking of a better way…

One of my team mates mentioned Approval Tests, and I quickly spiked it.

Approval Test Approach

I found the ApprovalTests.Net NuGet package, compatible with NUnit, MSTest, Xunit, and MbUnit.

Package Manager install command: PM> Install-Package ApprovalTests.

It makes testing easier when one needs to eyeball the differences.
ApprovalTests.Net uses your default installed diff tool, in my case P4Merge, to show the differences visually.
I had it up and running very quickly just by looking at their examples.

The magic attribute is: [UseReporter(typeof(DiffReporter))].

using ApprovalTests;
using ApprovalTests.Reporters;
using Data.Models;
using Newtonsoft.Json;
using System;
using System.Linq;
using Xunit;

namespace BuilderLibraryTests.ApprovalTests
{
    public class SimpleApprovalTests
    {
        [Fact]
        [UseReporter(typeof(DiffReporter))]
        public void ThisIsASimpleTests()
        {
            var address = new Address()
            {
                PostCode = "6000",
                StreetNumber = 12,
                Suburb = "Huge Jackman :)" // aka Hugh Jackman
            };

            var jsonAddress = JsonConvert.SerializeObject(address);
            Approvals.VerifyJson(jsonAddress); // Verify specific things, in our case JSON.

            /*
            // Another approach would be to use `Approvals.Verify(jsonAddress)`,
            // which is just a string compare.
            // The "disadvantage" with this approach is that
            // one has to format the json result with 'Formatting.Indented' (see below).
            // If you don't, you will get the object serialized on one line.
            // Whereas with `Approvals.VerifyJson(jsonAddress);` you get the formatting for free.
            // As always, use whatever gets the job done quickest.

            var jsonAddress = JsonConvert.SerializeObject(address, Formatting.Indented);
            Approvals.Verify(jsonAddress);
            */

            // A few other cool features I'd like to use when the scenario calls for it:
            // Approvals.VerifyHtml(htmlString);
            // Approvals.VerifyPdfFile(pdfFilePath);
            // Approvals.VerifyXml(xmlString);
        }
    }
}

On First Run of Approval Tests

When you run the test the first time, the diff tool will pop up showing two files:

  • *.received.* - This is the actual result from running your code.
  • *.approved.* - This is the expected result. The first time it will be empty, as you still have to decide what the expected content is.

First test run output result

You will also see two new files in your project.
In my example I verified JSON results with Approvals.VerifyJson(jsonAddress), hence the *.approved.json extension.
It could be different depending on the Approvals.VerifyX method used.

Files created after running test.

The *.approved.* file is the one you want to add to your source control.
Think of it as your test’s answer sheet.
The content saved in that file will be used by the Approval Test library to verify the actual result.
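For the Address example above, the *.approved.json file would end up holding roughly this (a sketch; the exact formatting may differ slightly):

{
  "PostCode": "6000",
  "StreetNumber": 12,
  "Suburb": "Huge Jackman :)"
}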

Create Content For Approved File

How do we create the *.approved.* content?

Easy - you simply close your diff tool and it will prompt you to save the differences from the actual result into the *.approved.* file.


Now that you’ve saved the file, you have a valid *.approved.* file.

When you re-run your test, it will pass and the *.received.* file will automatically be removed by the Approval Tests library.

With a green test, we can commit the *.approved.* file with confidence to source control.

Test Workflow:
  • make changes to your code
  • update the test, if required
  • run your test - let’s assume it fails
    • a *.approved.* file was created
    • a *.received.* file was created
  • the diff tool will shout about the mismatch
  • save the changes to the *.approved.* file
  • re-run your test
  • on a green test, add/commit the *.approved.* changes
  • on a green test, the *.received.* file is automatically removed
  • the new code is now covered

Suggestion: configure your source control to ignore files matching the pattern *.received.*.
I’ve added *.received.* to my .gitignore file.

Approval Tests In Action

Quick sample of Approval Tests in action.

Conclusion

Approval Tests is available in many flavours - C#, VB.Net, Java, PHP, Ruby, and NodeJS.
A quick Google search will help you out.

I was very happy with the results Approval Tests brought to our project and would definitely recommend considering it for your next eyeball scenario.

Use it…don’t use it :)


Apply Builder Pattern to Unit Tests

(Tech: C#)

Problem

I was working on a team (Team-B) where we were tasked with writing tests for a new implementation of the codebase.
The old implementation was fully covered, and our mission was to mimic the old tests in the new ones.

Sounds fairly simple, butttt…

The codebase is owned by another team (Team-A), and they have not contributed to the new implementation.

How do we create unit tests which will speak to Team-A when they read the new tests from the imposters? :)

Use the Builder Pattern

Here’s Wikipedia’s definition of the builder pattern:

The intent of the Builder design pattern is to separate the construction of a complex object from its representation.
By doing so the same construction process can create different representations.

Because Team-A doesn’t have the knowledge to create valid data objects for the new implementation, we used the builder pattern and created methods to abstract away the complex data structure setup.

Think of it from this angle: if someone else wants to add more tests, they should not have to dig deep into business logic to learn how to create a valid object. There should be a method they can call which gives them a valid object every time.

One advantage I found was that the longer we worked on the test suite, the faster we got, because we started re-using builders created by other team members.

When I wanted to get a complex object into a valid state before focusing on my test, I’d simply “new up” a builder and look for an interesting method on it. If I didn’t find one, I knew it was a new scenario and a new method had to be created.

Let’s look at some samples.

Samples

Disclaimer: These are all fictional - apply the principle and don’t get too bogged down in the example :)

(Samples github repo)

Let’s say another team is tasked with creating an Employee validator.
This team has no idea how to create a valid employee.
Luckily another awesome team has created an EmployeeBuilder, which they can re-use to focus on the logic of testing the validator.

Here is a requirement: an employee is valid when any of their addresses has an Australian postcode.

Let’s dream up some test code for the employee validator…

[Fact]
public void EmployeeValidatorTests_EmployeeAddressShouldBeVALIDWhenAnyPostCodeFromAustralia()
{
    // arrange
    var builder = new EmployeeBuilder();

    // no need to worry about how an Australian employee is created.
    // imagine a very complex object here, with a lot of setup under the "With" method.
    var employee = builder.WithEmployeeFromAustralia()
                          .Build();

    // act
    // system under test
    var sut = new EmployeeValidator(employee);

    // assert
    // the team can focus on testing validator logic,
    // not spending time figuring out how to create a complex employee object.
    sut.IsValidAustralianAddress().ShouldBeTrue();
}

Let’s check out the EmployeeBuilder class.

public class EmployeeBuilder : BuilderBase<Employee, EmployeeBuilder>
{
    AddressBuilder _addressBuilder = new AddressBuilder();

    public EmployeeBuilder WithEmployeeFromAustralia()
    {
        _concreteObject = new Employee()
        {
            Name = "Bruce",
            LastName = "Ozzy",

            Addresses = new List<Address>
            {
                // default values
                _addressBuilder.WithAustralianAddress().Build(),
            }
        };

        return this;
    }
}

public class BuilderBase<TBuildResult, TBuilder> : IBuilder<TBuildResult, TBuilder>
    where TBuildResult : class, new()
    where TBuilder : class, IBuilder
{
    protected TBuildResult _concreteObject = new TBuildResult();

    public TBuildResult Build()
    {
        return _concreteObject;
    }

    public TBuilder With(Action<TBuildResult> setAction)
    {
        setAction?.Invoke(_concreteObject);
        return this as TBuilder;
    }

    public TBuilder With<TRequestBuilder>(Action<TBuildResult, TRequestBuilder> setAction) where TRequestBuilder : class, IBuilder, new()
    {
        setAction?.Invoke(_concreteObject, new TRequestBuilder());
        return this as TBuilder;
    }
}

public interface IBuilder { /* marker to indicate a builder object */ }

public interface IBuilder<TBuildResult, TBuilder> : IBuilder
    where TBuildResult : class, new()
    where TBuilder : class, IBuilder
{
    TBuildResult Build();

    /// <summary>
    /// A generic way to set properties
    /// </summary>
    TBuilder With(Action<TBuildResult> setAction);

    TBuilder With<TRequestBuilder>(Action<TBuildResult, TRequestBuilder> setAction) where TRequestBuilder : class, IBuilder, new();
}

You’ll notice I have TBuilder With(Action<TBuildResult> setAction) on the BuilderBase class.

It allows me to write code like this:

var builder = new EmployeeBuilder();

var actual = builder.With(x => x.Name = "Samurai Jack")
                    .Build();

I like it because one can easily see the intent from the test.
Abstracting this away into a method would hide the fact that the Name property changed.

There is also
TBuilder With<TRequestBuilder>(Action<TBuildResult, TRequestBuilder> setAction) where TRequestBuilder : class, IBuilder, new()
on the BuilderBase class.

It allows me to write code like this:

var builder = new EmployeeBuilder();
var employee = builder
    .WithEmployeeFromAustralia()
    .With<AddressBuilder>((e, addressBuilder) => e.Addresses.Add(addressBuilder
        .WithSouthAfricanAddress()
        .Build())
    )
    .Build();
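For completeness, here’s a sketch of what the AddressBuilder used above might look like (the property values are hypothetical; the real implementation lives in the samples repo):

public class AddressBuilder : BuilderBase<Address, AddressBuilder>
{
    public AddressBuilder WithAustralianAddress()
    {
        _concreteObject = new Address()
        {
            PostCode = "6000",  // an Australian postcode
            StreetNumber = 12,
            Suburb = "Perth"
        };

        return this;
    }

    public AddressBuilder WithSouthAfricanAddress()
    {
        _concreteObject = new Address()
        {
            PostCode = "0181",  // a South African postcode
            StreetNumber = 7,
            Suburb = "Pretoria"
        };

        return this;
    }
}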

My suggestion would be to keep all builder classes in your test project only.
There’s no need for builders in production code, as the data will come from the real source.

More samples can be found on my github repo.

(Btw, I’m using Shouldly for my assertions)

TestStack.Dossier - Check it out!

This is a more in-depth “framework” and something I will definitely use on bigger projects.
If I needed more complex logic than my current BuilderBase implementation provides, I’d upgrade to Dossier, no questions asked.

TestStack.Dossier provides you with the code infrastructure to easily and quickly generate test fixture data for your automated tests in a terse, readable and maintainable way using the Test Data Builder, anonymous value and equivalence class patterns.

Conclusion

All the logic for creating complex objects is kept in one place, which helps with future maintenance.
Writing tests in a fluent way makes the tests more readable and also shows testing intent better.
Writing tests got faster as the builder library grew.
Cross-functional teams don’t need in-depth knowledge of a specific area; if there is a builder, they can focus on their tasks.

I had fun using the builder pattern to create data objects in a fluent(ish) way for unit testing.
I’ll use it again in future.

Use it…don’t use it :)

VSCode Multiline Editing Magic

I used Visual Studio Code’s multiline editing today and was pleasantly surprised.

This feature is available in a few editors, pick your favorite one and be more productive :)

Use it…don’t use it :)

Add Angular Tests to Netlify Deployment

For quite some time, Netlify has been my (free) tool of choice for deploying my blog and other personal projects.
It’s very simple to set up and their documentation is easy to follow.

But how do you make your Angular tests part of the Netlify build pipeline?

Most of the examples suggest: Build command: ng build --prod
What I need: Build command: ng test && ng build --prod

Netlify build settings sample

My first issue: ng test launches an instance of the Chrome browser, as specified in karma.conf.js.
Pretty simple to solve: run Headless Chrome.
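(Pointing Karma at Headless Chrome can be done with the --browsers=ChromeHeadless flag, as in the test-headless script below, or declared in karma.conf.js; a minimal sketch, not my full Angular CLI config:)

// karma.conf.js (sketch)
module.exports = function (config) {
  config.set({
    // ...keep the existing Angular CLI karma settings...
    browsers: ['ChromeHeadless'],
    singleRun: true // run once and exit, as a CI build should
  });
};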

But how do I get Headless Chrome installed on Netlify’s Linux build environment? Puppeteer.

I solved it with a quick npm install puppeteer.

I’ve updated my package.json:

"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build --prod",
"test": "ng test",
"test-headless": "ng test --watch=false --browsers=ChromeHeadless",
"lint": "ng lint",
"e2e": "ng e2e"
},

I tested everything locally, and Build command: npm run-script test-headless && npm run-script build executed successfully in my Windows environment.

After I pushed the new changes, the build failed with a “CHROME_BIN” env variable error.

{"os":"linux","arch":"x64"})
11:33:41 AM: audited 34037 packages in 15.642s
11:33:41 AM: found 0 vulnerabilities
11:33:41 AM: > beukes-bunch-health-tracker@0.0.0 test /opt/build/repo/ui
11:33:41 AM: > ng test
11:34:02 AM: 10 03 2019 03:34:02.050:INFO [karma-server]: Karma v4.0.1 server started at http://0.0.0.0:9876/
11:34:02 AM: 10 03 2019 03:34:02.053:INFO [launcher]: Launching browsers ChromeHeadless with concurrency unlimited
11:34:02 AM: 10 03 2019 03:34:02.060:INFO [launcher]: Starting browser Chrome
11:34:02 AM: 10 03 2019 03:34:02.061:ERROR [launcher]: No binary for Chrome browser on your platform.
11:34:02 AM: Please, set "CHROME_BIN" env variable.
11:34:02 AM: npm
11:34:02 AM: ERR! Test failed. See above for more details.

It seems Karma is looking for the path to the Chrome executable.

Second issue: how do I set the CHROME_BIN env variable to that path?

Thanks to the folks at Puppeteer, finding the path was easy; the install step prints the Chromium download location in the build logs:

1:00:59 PM: > puppeteer@1.13.0 install /opt/build/repo/ui/node_modules/puppeteer
1:00:59 PM: > node install.js
1:01:07 PM: Chromium downloaded to /opt/build/repo/ui/node_modules/puppeteer/.local-chromium/linux-637110
1:01:07 PM: > node-sass@4.11.0 postinstall /opt/build/repo/ui/node_modules/node-sass
1:01:07 PM: > node scripts/build.js

I’m not fluent in Linux, but I found the printenv command.
It prints all or part of the environment variables.

New plan: Build command: printenv && npm run-script test-headless && npm run-script build

With a bit of trial and error I discovered:
CHROME_BIN=/opt/build/repo/ui/node_modules/puppeteer/.local-chromium/linux-637110/chrome-linux/chrome

From the printenv output I learned that the env variable PWD=/opt/build/repo/ui.

I could use the PWD variable like this:
CHROME_BIN=${PWD}/node_modules/puppeteer/.local-chromium/linux-637110/chrome-linux/chrome

But linux-637110 smells like an appended build number, which could change whenever Puppeteer downloads a new Chromium revision.

I used this hack to dynamically look up the linux-xxxxxx folder name:
ls -x -1 ${PWD}/node_modules/puppeteer/.local-chromium.
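On my build it printed a single folder name (assuming only one Chromium revision has been downloaded):

$ ls -x -1 ${PWD}/node_modules/puppeteer/.local-chromium
linux-637110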

My ugly CHROME_BIN env variable (but it works):
CHROME_BIN=${PWD}/node_modules/puppeteer/.local-chromium/$(ls -x -1 ${PWD}/node_modules/puppeteer/.local-chromium)/chrome-linux/chrome

One can set a Linux env variable with export.

My final plan:
Build command: export CHROME_BIN=${PWD}/node_modules/puppeteer/.local-chromium/$(ls -x -1 ${PWD}/node_modules/puppeteer/.local-chromium)/chrome-linux/chrome && printenv && npm run-script test-headless && npm run-script build

To save myself the time of logging into Netlify to update the build command, I added a netlify.toml file.

[build]
base = "ui"
command = "export CHROME_BIN=${PWD}/node_modules/puppeteer/.local-chromium/$(ls -x -1 ${PWD}/node_modules/puppeteer/.local-chromium)/chrome-linux/chrome && printenv && npm run-script test-headless && npm run-script build"
publish = "ui/dist/beukes-bunch-health-tracker"

GREEN - build log:

...
...
7:13:24 PM: LANGUAGE=en_US:en
7:13:24 PM: YARN_VERSION=1.3.2
7:13:24 PM: rvm_ruby_string=ruby-2.3.6
7:13:24 PM: GIMME_GO_VERSION=1.10
7:13:24 PM: CHROME_BIN=/opt/build/repo/ui/node_modules/puppeteer/.local-chromium/linux-637110/chrome-linux/chrome
7:13:24 PM: GOCACHE=/opt/buildhome/.gimme_cache/gocache
7:13:24 PM: GEM_PATH=/opt/buildhome/.rvm/gems/ruby-2.3.6:/opt/buildhome/.rvm/gems/ruby-2.3.6@global
7:13:24 PM: > ng test --watch=false --browsers=ChromeHeadless
7:13:36 PM: 10 03 2019 11:13:36.774:INFO [karma-server]: Karma v4.0.1 server started at http://0.0.0.0:9876/
7:13:36 PM: 10 03 2019 11:13:36.776:INFO [launcher]: Launching browsers ChromeHeadless with concurrency unlimited
7:13:36 PM: 10 03 2019 11:13:36.780:INFO [launcher]: Starting browser ChromeHeadless
7:13:46 PM: 10 03 2019 11:13:46.808:INFO [HeadlessChrome 74.0.3723 (Linux 0.0.0)]: Connected on socket POrD2-FtM3GJq9JLAAAA with id 8678984
7:13:51 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 0 of 14 SUCCESS (0 secs / 0 secs)
7:13:52 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 1 of 14 SUCCESS (0 secs / 0.85 secs)
7:13:52 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 2 of 14 SUCCESS (0 secs / 1.562 secs)
7:13:53 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 3 of 14 SUCCESS (0 secs / 1.821 secs)
7:13:53 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 4 of 14 SUCCESS (0 secs / 1.85 secs)
7:13:53 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 5 of 14 SUCCESS (0 secs / 2.103 secs)
7:13:53 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 6 of 14 SUCCESS (0 secs / 2.332 secs)
7:13:53 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 7 of 14 SUCCESS (0 secs / 2.363 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 8 of 14 SUCCESS (0 secs / 2.92 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 9 of 14 SUCCESS (0 secs / 2.938 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 10 of 14 SUCCESS (0 secs / 3.219 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 11 of 14 SUCCESS (0 secs / 3.485 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 12 of 14 SUCCESS (0 secs / 3.494 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 13 of 14 SUCCESS (0 secs / 3.501 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 14 of 14 SUCCESS (0 secs / 3.657 secs)
7:13:54 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 14 of 14 SUCCESS (3.682 secs / 3.657 secs)
7:13:54 PM: TOTAL: 14 SUCCESS
7:13:54 PM: TOTAL: 14 SUCCESS
7:13:55 PM: > ng build --prod
7:15:24 PM: Date: 2019-03-10T11:15:24.219Z
7:15:24 PM: Hash: 64ca61b254dcfd5acdc1
7:15:24 PM: Time: 85158ms
7:15:24 PM: chunk {0} runtime.a5dd35324ddfd942bef1.js (runtime) 1.41 kB [entry] [rendered]
7:15:24 PM: chunk {1} es2015-polyfills.4a4cfea0ce682043f4e9.js (es2015-polyfills) 56.4 kB [initial] [rendered]
7:15:24 PM: chunk {2} main.3abc8440f2ae7fd24e18.js (main) 1.59 MB [initial] [rendered]
7:15:24 PM: chunk {3} polyfills.7e5c029b78344f3b7d7c.js (polyfills) 41.1 kB [initial] [rendered]
7:15:24 PM: chunk {4} styles.e230360cfdba5044352a.css (styles) 61.9 kB [initial] [rendered]
...
...

RED - build log:

...
7:06:09 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 8 of 14 SUCCESS (0 secs / 2.921 secs)
7:06:09 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 9 of 14 SUCCESS (0 secs / 2.938 secs)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 10 of 14 SUCCESS (0 secs / 3.244 secs)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0) AuthenticationService should be read tokens FAILED
7:06:10 PM: Expected 'Bearer' to be 'BearerRuan'.
7:06:10 PM: at UserContext.<anonymous> (src/app/services/authentication.service.spec.ts:65:31)
7:06:10 PM: at TestBedViewEngine.push../node_modules/@angular/core/fesm5/testing.js.TestBedViewEngine.execute (node_modules/@angular/core/fesm5/testing.js:1822:1)
7:06:10 PM: at UserContext.<anonymous> (node_modules/@angular/core/fesm5/testing.js:1991:29)
7:06:10 PM: at ZoneDelegate../node_modules/zone.js/dist/zone.js.ZoneDelegate.invoke (node_modules/zone.js/dist/zone.js:391:1)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 11 of 14 (1 FAILED) (0 secs / 3.671 secs)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0) AuthenticationService should be read tokens FAILED
7:06:10 PM: Expected 'Bearer' to be 'BearerRuan'.
7:06:10 PM: at UserContext.<anonymous> (src/app/services/authentication.service.spec.ts:65:31)
7:06:10 PM: at TestBedViewEngine.push../node_modules/@angular/core/fesm5/testing.js.TestBedViewEngine.execute (node_modules/@angular/core/fesm5/testing.js:1822:1)
7:06:10 PM: at UserContext.<anonymous> (node_modules/@angular/core/fesm5/testing.js:1991:29)
7:06:10 PM: at ZoneDelegate../node_modules/zone.js/dist/zone.js.ZoneDelegate.invoke (node_modules/zone.js/dist/zone.js:391:1)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 12 of 14 (1 FAILED) (0 secs / 3.689 secs)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 13 of 14 (1 FAILED) (0 secs / 3.695 secs)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 14 of 14 (1 FAILED) (0 secs / 3.856 secs)
7:06:10 PM: HeadlessChrome 74.0.3723 (Linux 0.0.0): Executed 14 of 14 (1 FAILED) (3.883 secs / 3.856 secs)
7:06:10 PM: TOTAL: 1 FAILED, 13 SUCCESS
7:06:10 PM: TOTAL: 1 FAILED, 13 SUCCESS
7:06:10 PM: npm
7:06:10 PM: ERR! code ELIFECYCLE
7:06:10 PM: npm
7:06:10 PM: ERR! errno 1
7:06:10 PM: npm ERR!
7:06:10 PM: beukes-bunch-health-tracker@0.0.0 test-headless: `ng test --watch=false --browsers=ChromeHeadless`
7:06:10 PM: npm ERR!
7:06:10 PM: Exit status 1
7:06:10 PM: npm
7:06:10 PM: ERR!
7:06:10 PM: npm
7:06:10 PM: failed during stage 'building site': Build script returned non-zero exit code: 1
7:06:10 PM: ERR! Failed at the beukes-bunch-health-tracker@0.0.0 test-headless script.
7:06:10 PM: npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
7:06:10 PM: npm
7:06:10 PM: ERR! A complete log of this run can be found in:
7:06:10 PM: npm
7:06:10 PM: Shutting down logging, 17 messages pending

Please leave a comment on how to enhance my ball of mud :)

Use it…don’t use it :)

Kudos to:
Todd Palmer - Angular Testing with Headless Chrome
Alain Chautard - How-to Running Angular tests on continuous integration servers

When Your Tests Are at the Mercy of Your Database

(Tech: C#, Microsoft SQL)

Recently I was investigating a bug which kept me, and a few of my mates, occupied for way longer than we are willing to admit :)

One summer’s morning

It started on a fresh summer’s morning…

After I’d completed my code changes, like a very good software citizen, I ran the project’s unit tests.
They were all green and I was riding the wave of success; nothing could stand in my way today…I was flying through my work.
The last formality was to create a Pull Request and wait for the build server to succeed before reviewers approved my changes.

Life…is…easy, everything is in harmony.

Until I received an email from the build server - failing tests.
Wait…what?!?!
They were all green locally, surely this can’t be right.
I kicked off another build; now they passed.
Kicked off another build; now they failed.
Kicked off yet another build; they failed again.
This carried on a few more times in the hope I’d find a pattern. No luck.

It was very strange, as my changes did not relate to the failing tests.

I then realized it was one of those “fun” intermittent failing tests.

And so my investigation was born…

Investigation

I set my acceptance criteria to 5 successful consecutive builds.

I had two failing tests.
In short, the tests set a property and then saved that property to the database.

public void ThenInvoiceCommentIsStored()
{
    var invoices = GetInvoices(); // retrieving actual database data

    // assert
    invoices.Lines[0].Comment.ShouldBe("Hello World"); // sometimes fails here
}

The verification failed when that property was expected to have a string value but the actual value was null.

Example:

invoices.Lines[0].Comment should be 
"Hello World"
but was
null

Past experience has taught me that intermittent failing tests usually involve dates/times/timezones.
In my case, none of those were present.

I checked the server build plans, added debugging messages, second-guessed the frameworks we used (long shot, I know), added delays to check for race conditions, and tried a few other ridiculous ideas not worth mentioning.

None of my plans worked.

The investigation was a very slow and painful process, as every code change took 5 minutes to build.

Die bug, die!

I decided to turn my focus back to the origin of the problem, the failing tests.
I zoomed in on one particular test - ThenInvoiceCommentIsStored.

Looking at the test, the intent is to assert on the first element in the list.
I added a breakpoint and discovered there were actually two items in the Lines list.

Hypothesis: is it possible we updated the data correctly, but the list is simply in a different order?

I changed the test to find the Line item by ID (a filter), rather than taking the first element in the Lines list.

public void ThenInvoiceCommentIsStored()
{
    var invoices = GetInvoices(); // retrieving actual database data

    // assert
    // _expectedLineID was set before the actual assertion, eg: 88
    var specificLine = invoices.Lines.Single(x => x.Id == _expectedLineID);
    specificLine.Comment.ShouldBe("Hello World");
}

BOOM, the beast was slain…5 successful consecutive builds!

Solution Explained

The actual problem was a mistake in the test.

The test asserted that the first element in the list had changed.
That holds true, so long as there is only one item in the list.
As soon as there are two items in the list, sort order comes into play.

The extra data silently changed the test.

Because we query the data directly from the database with no sort order specified, the database returns the records in its default order.

Our tests were at the mercy of the database sort order.

Usually a database’s default sort order follows the table’s Primary Key (although, strictly speaking, without an ORDER BY the order is not guaranteed).

In our case, Lines.ID was the Primary Key and was of Data Type uniqueidentifier (GUID).

CREATE TABLE [dbo].[Lines](
    [ID] [uniqueidentifier] NOT NULL,
    [Comments] [nvarchar](max) NULL,
    CONSTRAINT [PK_Lines] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    )
)

Example: two Line records were added in exactly this order, differing by only one digit:

488d6b1a-898e-4a13-acb3-a9e500d66962 (sample only, GUIDs are unique)
288d6b1a-898e-4a13-acb3-a9e500d66962

When querying the data, even though 4xxxxxxx-xxxx-xxxx... was inserted first, 2xxxxxxx-xxxx-xxxx... will always show up first because of the ascending Primary Key sort order [ID] ASC.

Query:

SELECT ID FROM Lines

Result:

ID
288d6b1a-898e-4a13-acb3-a9e500d66962
488d6b1a-898e-4a13-acb3-a9e500d66962
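(If a test genuinely depends on order, the query itself should say so with an explicit ORDER BY on a meaningful column; CreatedDate below is a hypothetical example:)

SELECT ID, Comments
FROM Lines
ORDER BY CreatedDate -- hypothetical column; don't rely on the GUID Primary Key for ordering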

This had a devastating impact on the server unit tests, giving us intermittent failures.
In some cases we were lucky and the GUID order worked out well for asserting on the first element of the Lines list; other times the updated Line item was somewhere else in the list.

What about the green local tests?

I’m still unclear as to why they passed.

The only difference is that on the server the database is created fresh every time, while locally the database is created only the first time you run the unit tests.

I decided to let this one go, as the change fixed the server build and the tests still passed locally.

My initial mission was completed.

What have I learned?

  1. Write your tests with more intent.
    Your test must speak to future you about what you were trying to test.
    When you look back at code you wrote 3 months ago, you will thank the past you :)

  2. Perseverance - failure was not an option.
    The longer the problem dragged on, the more drive I had to fix the bug.
    Once you’ve invested so much time in an issue, the victory is so much sweeter.

Use it…don’t use it :)