Category Archives: Software Development

Using defineParameterType() with multiple regexes in cucumber-js

In a previous post I showed how to automatically replace parameter values before they get passed to your step definition code.

Now let's say you want that replacing/parsing/transformation for single- and double-quoted strings like these in your feature file:

When I print "Hello world!"
When I print 'its a beautiful day!'

Well…you're in luck! The defineParameterType() method allows you to pass an array of regexes. We can use that to support both single- and double-quoted strings with the same transformation function.

There's a big gotcha here though. The docs say this about the transformer function:

A function or method that transforms the match from the regexp. Must have arity 1 if the regexp doesn’t have any capture groups. Otherwise the arity must match the number of capture groups in regexp.

In other words, when you use an array of regexes (or a regex with multiple capture groups), the transformer function must have one parameter for every capture group across all regexes in the array.

When cucumber calls the function, each parameter receives the result of the corresponding capture group. If a regex did not match, cucumber passes undefined for the parameters that correspond to its capture groups, so you'll have to check whether each parameter is undefined/null before using it.

defineParameterType({
    regexp: [/'([^']*)'/, /"([^"]*)"/],
    transformer: function (singleQ, doubleQ) {
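        // replacePlaceholders() is the transformation helper from the previous post.
        // Only one of singleQ/doubleQ will be defined, depending on which regex matched.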
        return singleQ ? replacePlaceholders(singleQ) : replacePlaceholders(doubleQ)
    },
    name: "a_string_with_replaced_placeholders"
});
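
A step definition can then use the parameter type by name in its Cucumber expression, just like the {mystring} example further down this page. A minimal sketch (the step text is only an illustration):

defineStep("I print {a_string_with_replaced_placeholders}", async function (x) {
    // x arrives with its placeholders already replaced by the transformer above
    console.log(x)
});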

Automatic type conversion of parameters in SpecFlow

In a previous post I showed that SpecFlow can change the values of parameters. This mechanism is not limited to transforming the content of string values; it can also convert the literal string in your feature file into a complex object for your step-definition code.

Let's say in your feature file you want to write steps like this:

Scenario: MyScenario
    When I print this list 'A,B,C,D'

Then you might be tempted to convert the string 'A,B,C,D' to a list like this:

[Binding]
public class MyBindings
{
    [StepDefinition(@"I print this list '([^']*)'")]
    public void PrintList(string input)
    {
        var items = input.Split(',');
        foreach(var item in items)
        {
            Console.WriteLine(item);
        }
    }
}

Don’t do this…it causes the same problems as mentioned in the previous post.

Instead, SpecFlow's Step Argument Conversion lets us simplify our step definition code to this:

[Binding]
public class MyBindings
{
    [StepDefinition(@"I print this list '([^']*)'")]
    public void PrintList(IEnumerable<string> items) 
    {
        foreach(var item in items)
        {
            Console.WriteLine(item);
        }
    }
   
    [StepArgumentTransformation]
    public IEnumerable<string> TransformStringToList(string input)
    {
        return input.Split(',');
    }
}

Automatically replacing/transforming input parameters in cucumber-js

Most implementations of cucumber provide a mechanism for changing literal text in the feature file into values or objects your step definition code can use. This is known as step definition transforms or step argument transforms. Here's how this works in cucumber-js.

Assume we have this scenario:

Scenario: Test
    When I print 'Welcome {myname}'
    And I print 'Today is {todays_date}'

And we have this step definition:

defineStep("I print {mystring}", async function (this: OurWorld, x: string) {
    console.log(x)
});

Notice the use of {mystring} in the Cucumber expression.

We can use defineParameterType() to automatically replace all placeholders.

defineParameterType({
    regexp: /'([^']*)'/,
    transformer: function (s) {
        return s
            .replace('{todays_date}', new Date().toDateString())
            .replace('{myname}', 'Gerben')
    },
    name: "mystring",
    useForSnippets: false
});

You can even use this for objects, like so:

defineParameterType({
    name: 'color',
    regexp: /red|blue|yellow/,
    transformer: s => new Color(s)
})

defineStep("I fill the canvas with the color {color}", async function (this: OurWorld, x: Color) {
    // x is an object of type Color
});

In the feature file this reads:

When I fill the canvas with the color red

Containerising the development environment

One of the nice things about Docker is that we can use all kinds of software without cluttering up our local machine. I really like being able to have the development environment itself run in a container. Here is an example where we:

  • Get a Node.js development environment with all required tools and packages
  • Allow remote debugging of the app in the container
  • See code changes immediately reflected inside the container

The Dockerfile below gives us a container with all the required tools and packages for a Node.js app. In this example we assume the '.' directory contains the files needed to run the app.

FROM node:9

WORKDIR /code

RUN npm install -g nodemon

COPY package.json /code/package.json
RUN npm install && npm ls
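# Move node_modules out of /code so the volume mount from docker-compose.yml does not hide them;
# Node still finds them because module resolution also checks parent directories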
RUN mv /code/node_modules /node_modules
COPY . /code

CMD ["npm", "start"]

That's nice, but how does this provide remote debugging? And how do code changes propagate to a running container?

Two very normal aspects of Docker achieve this. Firstly, docker-compose.yml overrides the CMD ["npm", "start"] statement and starts nodemon with the --inspect=0.0.0.0:5858 flag. That starts the app with the debugger listening on all of the machine's IP addresses. We expose port 5858 so that remote debuggers can connect to the app in the container.

Secondly, the compose file contains a volume mapping that overrides the /code folder in the container and points it at the directory on your local machine where you edit the code. Combined with the --watch flag, nodemon sees any changes you make to the code and restarts the app in the container with the latest code.

Note: if you are running Docker on Windows or the code is stored on a network share, then you must use the --legacy-watch flag instead of --watch.

The docker-compose.yml file:

version: "2"

services:
  app:
    build: .
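    # Override the image's CMD: run the app under nodemon with the debugger listening on 0.0.0.0:5858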
    command: nodemon --inspect=0.0.0.0:5858 --watch
    volumes:
      - ./:/code
    ports:
      - "5858:5858"

Here’s a launch.json for Visual Studio Code to attach to the container.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach",
            "type": "node",
            "request": "attach",
            "port": 5858,
            "address": "localhost",
            "restart": true,
            "sourceMaps": false,
            "outDir": null,
            "localRoot": "${workspaceRoot}",
            "remoteRoot": "/code"
        }
    ]
}

Understanding the LINQ nested grouping example

Here’s an explanation of how the default example for LINQ nested grouping actually works. The usual example for nested grouping looks like this:

from student in Students
group student by student.Faculty into Faculty
from dbtgroup in
(
    from student in Faculty
    group student by student.DebtCategory
)
group dbtgroup by Faculty.Key;

The objective of this statement is to first group the students into faculties and then, within each faculty, create subgroups of students by their DebtCategory.

So how does this actually work, and what's the equivalent method/lambda syntax? The first step is to group each student into their faculty. Assume we have the following data:

public class Student
{
   public string Name { get; set; }
   public string Faculty { get; set; }
   public int DebtCategory { get; set; }
}

IList<Student> Students = new List<Student>();
Students.Add(new Student { Name = "John" , Faculty = "IT"     , DebtCategory = 2 });
Students.Add(new Student { Name = "Jane" , Faculty = "IT"     , DebtCategory = 2 });
Students.Add(new Student { Name = "Jesse", Faculty = "Finance", DebtCategory = 2 });
Students.Add(new Student { Name = "Linda", Faculty = "Finance", DebtCategory = 1 });

The following query groups each student into a faculty

var query1 = from student in Students
group student by student.Faculty into Faculty
select Faculty;

//The Method syntax for the above query is:
var query1Method = Students
.GroupBy(student => student.Faculty)
.Select ( Faculty => Faculty);

//This gives us the following IGrouping<string, Student> as result
//
// [0]
//    Key   :  IT
//    Values: 
//          [0] John (IT) (2)
//          [1] Jane (IT) (2)
//
// [1]
//    Key   : Finance
//    Values:
//          [0] Jesse (Finance) (2)
//          [1] Linda (Finance) (1)

The next step is to add another level of grouping:

var query2 = from student in Students
group student by student.Faculty into Faculty
from dbtgroup in
(
    from student in Faculty
    group student by student.DebtCategory
)
select dbtgroup;
//This gives us the following IGrouping<int, Student> as result
//[0]
//  Key   : 2
//  Values:
//        [0] John (IT) (2)
//        [1] Jane (IT) (2)
//
//[1]
//  Key   : 2
//  Values:
//        [0] Jesse (Finance) (2)
//
//[2]
//  Key   : 1
//  Values:
//        [0] Linda (Finance) (1)

// The following is the literal translation of the above Comprehension syntax into method syntax. We're ignoring this as explained below
//	var query2Method = Students
//		.GroupBy(student => student.Faculty)
//		.SelectMany(  Faculty =>Faculty.GroupBy(student => student.DebtCategory)
//					, (Faculty, dbtgroup) => dbtgroup);
	
//The final complete query ends with "group dbtgroup by Faculty.Key;".
//This statement causes the compiler to see that you're referring to the Faculty object from the SelectMany,
//so instead of "(Faculty, dbtgroup) => dbtgroup" it emits a slightly different projection:
//"(Faculty, dbtgroup) => new { Faculty, dbtgroup }"
var query2Method = Students
.GroupBy(student => student.Faculty)
.SelectMany( Faculty =>Faculty.GroupBy(student => student.DebtCategory)
	 , (Faculty, dbtgroup) => new {Faculty, dbtgroup});

Query2 is close to our desired output; however, the grouping is the wrong way around. So the final step is:

var query3 = from student in Students
group student by student.Faculty into Faculty
from dbtgroup in
    (
    from student in Faculty
    group student by student.DebtCategory
    )
group dbtgroup by Faculty.Key;

//The method/lambda syntax is:
var query3Method = Students
.GroupBy(student => student.Faculty)
.SelectMany (
	Faculties => Faculties.GroupBy (student => student.DebtCategory)
	, (Faculty, dbtgroup) => 
		new  
		{
			Faculty = Faculty, 
			dbtgroup = dbtgroup
		} )
.GroupBy( item => item.Faculty.Key, item => item.dbtgroup );

//This gives us the following groups as result
//[0]
//  Key   : IT
//  Values:
//        [0] Key   : 2
//            Values:
//                  [0] John (IT) (2)
//                  [1] Jane (IT) (2)
//[1]
//  Key   : Finance
//  Values:
//        [0] Key   : 2
//            Values:
//                  [0] Jesse (Finance) (2)
//        [1] Key   : 1
//            Values:
//                    [0] Linda (Finance) (1)
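
To consume the nested grouping, iterate the outer faculty groups and then the inner debt-category groups. A minimal sketch:

foreach (var facultyGroup in query3)
{
    Console.WriteLine("Faculty: " + facultyGroup.Key);
    foreach (var debtGroup in facultyGroup)
    {
        Console.WriteLine("  DebtCategory: " + debtGroup.Key);
        foreach (var student in debtGroup)
        {
            Console.WriteLine("    " + student.Name);
        }
    }
}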

Entity Framework Code First migrations and the [StringLength] annotation

Recently I needed to change my model so that a field would be checked for uniqueness. I eagerly added the [StringLength(3)] and [Index(IsUnique = true)] annotations to the model and ran Add-Migration and Update-Database. Close, but no cigar, unfortunately. Update-Database kept throwing the following error:

System.Data.SqlClient.SqlException (0x80131904): Column 'IsoCode' in table 'dbo.CurrencyModels' is of a type that is invalid for use as a key column in an index.
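
For context, the model property looked roughly like this (the class name is inferred from the table name in the error message; treat the sketch as illustrative):

using System.ComponentModel.DataAnnotations;        // [StringLength]
using System.ComponentModel.DataAnnotations.Schema; // [Index], shipped with Entity Framework

public class CurrencyModel
{
    public int Id { get; set; }

    // Unique, at most three characters long
    [StringLength(3)]
    [Index(IsUnique = true)]
    public string IsoCode { get; set; }
}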

The error occurs because the generated code migration was only applying the index and not the length restriction. You can fix this directly in the Up() and Down() methods of the migration by using AlterColumn(), as follows:

    public partial class UniqueCurrency : DbMigration
    {
        public override void Up()
        {
            AlterColumn("dbo.CurrencyModels", "IsoCode", c => c.String(maxLength: 3));
            CreateIndex("dbo.CurrencyModels", "IsoCode", unique: true);
        }
        
        public override void Down()
        {
            AlterColumn("dbo.CurrencyModels", "IsoCode", c => c.String(maxLength: null));
            DropIndex("dbo.CurrencyModels", new[] { "IsoCode" });
        }
    }

Dynamic predicates in C# using PredicateBuilder

One of the challenges I frequently encounter is having to translate the arbitrary criteria in a test case into LINQ selection predicates. Take the following very simple example test case:

Feature: ModifyingInvoices
	In order to demonstrate the usefulness of PredicateBuilder, 
        we will show how to verify if a C# collection contains a
        record that matches multiple criteria that are only known 
        at run time

Scenario: ModifyDescription
	When I create an invoice with number '123' for '20' euro
	Then The systems invoice store must look like:
	| Number | Amount | DescriptionPresent | Description |
	| 123    | 20     | False              |             |
	When I change the description in invoice '123' to 'Testing!'
	Then The systems invoice store must look like:
	| Number | Amount | DescriptionPresent | Description |
	| 123    | 20     | True               | Testing!    |

In this very small example you can already see that the C# code will need to determine at run time IF an invoice exists AND MAYBE what the contents of its description should be. If an invoice has many fields, this becomes exponentially complex in the code. If your criteria require an OR construct, it gets even more complex. The solution is to use a PredicateBuilder that builds a dynamic predicate.

First, install the NuGet package LINQKit (see the PredicateBuilder website). Then add the directive using LinqKit; to your code. Now create the code that queries your data as follows:

        [Then(@"The systems invoice store must look like:")]
        public void ThenTheSystemsInvoiceStoreMustLookLike(Table table)
        {
            var rows = table.CreateSet<InvoiceTest>();

            foreach(InvoiceTest test in rows)
            {
                var MyPredicate = LinqKit.PredicateBuilder.True<Invoice>();
                MyPredicate = MyPredicate.And(invoice => invoice.Number == test.Number);
                MyPredicate = MyPredicate.And(invoice => invoice.Amount == test.Amount);

                if (test.DescriptionPresent)
                {
                    MyPredicate = MyPredicate.And(invoice => invoice.Description.Equals(test.Description));
                }

                //Test that our datastore contains an invoice that matches the predicate from the testcase
                IQueryable<Invoice> Matches = this.Invoices.AsQueryable().Where<Invoice>(MyPredicate);
                Assert.AreEqual(1, Matches.Count());
            }
        }
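
For reference, the binding above assumes an Invoice domain object and an InvoiceTest row class roughly like the following (a sketch inferred from the table columns and the predicate code; the real types may differ):

// Domain object held in this.Invoices
public class Invoice
{
    public string Number { get; set; }
    public decimal Amount { get; set; }
    public string Description { get; set; }
}

// One row of the SpecFlow table, materialised by table.CreateSet<InvoiceTest>()
public class InvoiceTest
{
    public string Number { get; set; }
    public decimal Amount { get; set; }
    public bool DescriptionPresent { get; set; }
    public string Description { get; set; }
}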

What to do when your JQuery-ui dialog is hidden behind other elements

If you see your jQuery UI dialog being hidden by other elements on the page, then you need to increase its z-index. I recently ran into a case where the JQXGrid widget was using very high z-indices outside of my control.

Here’s the code:

ZIndexer = function () {
    var self      = this;
    this.Elements = [];
    
    this.Add = function (JQuerySelector) {
        var DomElementArray = $(JQuerySelector)
        $.each(DomElementArray, function (i, element) { self.Elements.push(element) })
        return this;
    }

    this.GetNextFreeZIndex = function () {
        var zIndeces = $(this.Elements).sort(function descending(a, b) {
            var bZIndex = $(b).zIndex()
            var aZIndex = $(a).zIndex()
            return bZIndex - aZIndex
        })

        return $(zIndeces[0]).zIndex() + 1;
    }

}

//My grid is in a div with id jqxgrid. All of its child elements need
//to be considered when figuring out the next available ZIndex
var foo = new ZIndexer().Add("#jqxgrid *");

//Set the z-index of the jquery-ui dialog and its overlay to the highest available
$('.ui-widget-overlay').css('z-index',foo.GetNextFreeZIndex());
$('.ui-dialog').css('z-index',foo.GetNextFreeZIndex() + 1);

Performance of JQXGrid combined with knockout

The other day I noticed poor performance of a JQXGrid when combined with knockout. I had a ko.observableArray() containing objects, each with only three ko.observable() properties. I was using JQXGrid's selection check-box on each row. Event handlers were set up to react to changes in the check-box and set one of the ko.observable() properties on the corresponding object in the array.

On my page I was displaying the following:


  1. The JQXGrid

  2. An HTML table using the knockout foreach binding. This table displayed a checkbox for one of the observables and static text for the other one

  3. A string representation of the ViewModel using data-bind="text: JSON.stringify(ko.toJS(MyViewModel), null, 4)"

When I increased the number of objects in the array, just modifying one check-box caused the UI to slow down to unacceptable levels.

| Items in array | Time to complete one click (ms) | Time to select all (ms) |
| 25             | 1.408,136                       | 22.233,092              |
| 50             | 2.156,774                       | 77.999,535              |
| 100            | 5.871,934                       | 473.352,168             |
| 200            | 23.124,779                      |                         |
| 400            | 115.075,14                      |                         |
| 800            | 707.176,804                     |                         |

When we graph this, you can see a clear O(n^2) performance bottleneck:
[Graph: the time for a single click rises steeply as the number of items in the array grows]

I wanted to change the grid's source property to use a dataAdapter. However, while that did render the table, every column was empty. This is detailed in this link, where they say:

 March 30, 2012 at 12:53 pm	

It is currently not possible to bind the grid datafields to observable properties. Could you send us a sample view model which demonstrates the required functionality, so we can create a new work item and consider implementing the functionality in the future versions? Looking forward to your reply.

Best Wishes,
Peter