Has it been almost three years since my last blog post 🤦‍♂️?

Up until the very beginning of 2016 I was mostly doing .NET development using frameworks such as ASP.NET Web API and ASP.NET MVC. But years before that, an itch had developed.

Around 2012 my interest in dynamic languages grew and I started to experiment and play with Ruby. Looking back, I think it was mostly the fact that it felt so weird to lay a typed contract over a dynamic format like JSON.

On the other hand, our codebases started to grow on the client side. We were using libraries like jQuery and jQuery UI to make things more UX-friendly. In the end it looked like we had to create controllers in JavaScript just to keep it well structured.

In the beginning of 2016 I got the opportunity to become lead of a new project for a client. The choice was made to build it entirely in JavaScript, using AngularJS for the frontend and Node.js for the backend.

I never looked back. My preferred stack nowadays is React on the frontend and Node.js on the backend, and I decided to pick up blogging again!

Our current project has the ability to generate and combine EPL labels from different sources. One of the scenarios required embedding an image (PNG) into a generated label.

GW - Direct Graphic Write command

Description: Use this command to load binary graphic data directly into the Image Buffer memory for immediate printing. The printer does not store graphic data sent directly to the image buffer.

The graphic data is lost when the image has finished printing, power is removed or the printer is reset. Commands that size (Q and q) or clear (N and M) the image buffer will also remove graphic image data.

Syntax: GWp1,p2,p3,p4,DATA

  • p1 Horizontal start position (X) in dots.
  • p2 Vertical start position (Y) in dots.
  • p3 Width of graphic in bytes. Eight (8) dots = one (1) byte of data.
  • p4 Length of graphic in dots (or print lines).
  • DATA Raw binary data without graphic file formatting. Data must be in bytes. Multiply the width in bytes (p3) by the number of print lines (p4) for the total amount of graphic data. The printer automatically calculates the exact size of the data block based upon this formula.
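To make the size calculation concrete, here is a quick sketch (in Python, with hypothetical dimensions) of how p3 and the total data length follow from the formula above:

```python
import math

width_dots, print_lines = 100, 50   # hypothetical graphic: 100 dots wide, 50 print lines
p3 = math.ceil(width_dots / 8)      # width in bytes: eight dots = one byte
data_length = p3 * print_lines      # total bytes of raw graphic data the printer expects
print(p3, data_length)              # → 13 650
```

So a 100 × 50 graphic needs 650 bytes of DATA after a header of the form GWp1,p2,13,50,.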

The challenge

The challenge here was to figure out what exactly they mean by "Raw binary data without graphic file formatting"!

So after some googling, a colleague of mine found the following working solution on CodeProject and adapted it to our needs.

using System;
using System.Drawing;
using System.IO;
using System.Text;

private static string SendImageToPrinter(int top, int left, Bitmap bitmap)
{
  using (MemoryStream ms = new MemoryStream())
  using (BinaryWriter bw = new BinaryWriter(ms, Encoding.ASCII))
  {
    //set the p3 parameter; remember it is the width of the graphic in bytes,
    //so we divide the width of the image by 8 and round up
    int P3 = (int)Math.Ceiling((double)bitmap.Width / 8);
    bw.Write(Encoding.ASCII.GetBytes(
      string.Format("GW{0},{1},{2},{3},", top, left, P3, bitmap.Height)));
    //the width of the matrix is rounded up to a multiple of 8
    int canvasWidth = P3 * 8;
    //Convert the image into a two-dimensional binary matrix with the loops below:
    //for every pixel inside the image we calculate the luminance and map it
    //to 1 (white) or 0 (black); pixels outside the image default to white.
    //Because P3 is expressed in bytes (8 bits), we gather 8 dots of this matrix,
    //convert them into a byte and write it to memory using the shift-left operator <<
    //e.g. 1 << 7  ---> 10000000
    //     1 << 6  ---> 01000000
    //     1 << 3  ---> 00001000
    for (int y = 0; y < bitmap.Height; ++y)    //loop from top to bottom
    {
      for (int x = 0; x < canvasWidth; )       //from left to right
      {
        byte abyte = 0;
        for (int b = 0; b < 8; ++b, ++x)       //gather 8 bits and pack them into a byte
        {
          int dot = 1;                         //1 for white, 0 for black
          //while the pixel is still within the width of the bitmap,
          //check its luminance for white or black; outside the bitmap it stays white
          if (x < bitmap.Width)
          {
            Color color = bitmap.GetPixel(x, y);
            int luminance = (int)((color.R * 0.3) + (color.G * 0.59) + (color.B * 0.11));
            dot = luminance > 127 ? 1 : 0;
          }
          abyte |= (byte)(dot << (7 - b));     //shift left, then OR to pack 8 bits into a byte
        }
        bw.Write(abyte);
      }
    }
    bw.Flush();
    //rewind the stream
    ms.Position = 0;
    //get the encoding; I have no idea why code page 1252 works and others fail
    return Encoding.GetEncoding(1252).GetString(ms.ToArray());
  }
}
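The heart of the conversion, packing 8 dots into a byte, is easier to see in isolation. Here is the same idea as a rough sketch in Python (not the author's code):

```python
import math

def pack_row(pixels, width):
    """Pack one row of luminance values (0-255) into EPL graphic bytes.
    Dots beyond the image width are padded white (bit = 1)."""
    p3 = math.ceil(width / 8)          # width of the graphic in bytes
    packed = bytearray()
    for byte_index in range(p3):
        b = 0
        for bit in range(8):           # gather 8 dots into one byte
            x = byte_index * 8 + bit
            dot = 1                    # 1 = white (also used as padding)
            if x < width:
                dot = 1 if pixels[x] > 127 else 0
            b |= dot << (7 - bit)      # most significant bit is the leftmost dot
        packed.append(b)
    return bytes(packed)

# 10-dot row: 8 dark dots followed by 2 light dots -> 2 bytes, padded with white
print(pack_row([0] * 8 + [255] * 2, 10).hex())  # → 00ff
```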

The problem

After analyzing the performance of our code under high load, we came to the conclusion that this extraction was really slowing the entire process down: it took around 1.5 seconds on average for a 10 KB image, mostly caused by the use of GetPixel.

What does it do?

  • Extracts raw binary data
  • Transforms the image into black and white
  • Pads the parts that are out of bounds with white

What did we need?

  • Extract the raw binary data

We already had an image within bounds, which was already black and white (grayscale) in Format1bppIndexed format.

Extract raw binary fast (avg. 84 ms on a 10 KB file)

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

private static byte[] GetRawPixelData(Image image)
{
  using (var bitmap = (Bitmap) image)
  {
    var bitmapData = bitmap.LockBits(
      new Rectangle(0, 0, bitmap.Width, bitmap.Height),
      ImageLockMode.ReadOnly, PixelFormat.Format1bppIndexed);
    try
    {
      var length = bitmapData.Stride * bitmapData.Height;
      byte[] bytes = new byte[length];

      // Copy the locked bitmap data into the byte[]
      Marshal.Copy(bitmapData.Scan0, bytes, 0, length);

      return bytes;
    }
    finally
    {
      // Make sure we unlock no matter what
      bitmap.UnlockBits(bitmapData);
    }
  }
}

Conversion to black and white (greyscale)

using System.Drawing;
using System.Drawing.Imaging;

public Image ConvertToGrayscale(Image image)
{
  var bitmap = new Bitmap(image.Width, image.Height);
  using (var g = Graphics.FromImage(bitmap))
  {
    //create the grayscale ColorMatrix
    var colorMatrix = new ColorMatrix(new[]
    {
      new[] {.3f, .3f, .3f, 0f, 0f},
      new[] {.59f, .59f, .59f, 0f, 0f},
      new[] {.11f, .11f, .11f, 0f, 0f},
      new[] {0f, 0f, 0f, 1f, 0f},
      new[] {0f, 0f, 0f, 0f, 1f}
    });

    //create some image attributes
    var attributes = new ImageAttributes();

    //set the color matrix attribute
    attributes.SetColorMatrix(colorMatrix);

    //draw the original image on the new image
    //using the grayscale color matrix
    g.DrawImage(image, new Rectangle(0, 0, image.Width, image.Height),
        0, 0, image.Width, image.Height, GraphicsUnit.Pixel, attributes);
  }
  return bitmap;
}

We use mocks, stubs or fakes when writing tests, mostly using a mocking framework of choice (Moq, RhinoMocks, NSubstitute).

The reason we do this is to isolate our System Under Test (SUT) from its dependencies.

We want to avoid repeating ourselves and test the same logical code block (method, property, class, condition, …) multiple times.

What’s wrong with it?

In fact there is nothing wrong with it, apart from a misunderstanding of ‘System under test’.

Do you really feel that the class you are testing is a system on its own?

Unless you are developing a framework, the answer in most cases should be no.

What do we need to make mocking possible?

Most of the mocking frameworks need a way to intercept calls, either by marking the method (or property) as virtual or by introducing an interface.

As virtual methods (or properties) tend to be a bit on the annoying side, we just introduce interfaces everywhere, named after their implementation class and prefixed with 'I', as good convention-abiding citizens do.

By doing this we just ignore the reason an interface exists in the first place. An interface should exist when there are multiple implementations available or when you want to open it up for extension!

A good practice before introducing an interface is to find a meaningful name for it without the 'I' prefix. If the best you can come up with is the class name itself, you might not need it.

Does it hurt?

If abused, it can block you from making changes to the code base, which is in direct contrast with the reason you added the tests in the first place.

It can ‘glue’ your classes together in such a way that refactoring parts of the system can only result in changing a lot of tests, or even worse, throwing them out of the window. Even if the outcome of the SUT is the same as before.

Tests are supposed to help you, not work against you!

Ideal justifications for adapting a test?

  1. You made the wrong assumption in the first place
  2. The requirement changed

Bad justifications?

  • Object composition changes
  • Refactoring

Sometimes we developers tend to have an itch. When I saw that there was a new kid in town (AppVeyor) that does free open-source continuous integration, I just had to scratch it.

As I resent XML configuration files, I love the simplicity of YAML and the older INI file formats.

But if you would like to read INI files in .NET you have to do a DllImport:

[DllImport("kernel32.dll", EntryPoint = "GetPrivateProfileStringW",
  CharSet = CharSet.Unicode, ExactSpelling = true)]
private static extern int GetPrivateProfileString(
  string lpAppName,
  string lpKeyName,
  string lpDefault,
  StringBuilder lpReturnedString,
  int nSize,
  string lpFileName);

So I thought of creating a simple, relaxed INI parsing library called ZenIni, which you can obtain through NuGet.

Getting started

Install-Package zenini


Everything starts with constructing the provider, which you could register as a static instance in your DI container of choice:

using System;
using Zenini;
using Zenini.Readers;

namespace ConsoleApplication5
{
  internal class Program
  {
    private static void Main(string[] args)
    {
      var provider = new IniSettingsProvider(
        new DefaultSettingsReader(StringComparer.OrdinalIgnoreCase));
    }
  }
}

The DefaultSettingsReader allows you to specify how strings should be compared so you can choose whether or not to ignore case.


The IniSettings class is the in-memory representation of your INI file, which consists of sections with or without nested key/value pairs.

IIniSettings settings = provider.FromFile(@"c:\Temp\DefaultSettings.ini");


Getting a section is as easy as:

ISection firstSection = settings["SectionOne"];

Even if the original file did not contain the section, it will never return null; it will return the static instance Section.Empty. This relieves you from checking for null when you need to access a value.


Given the following ini file
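The file itself isn't reproduced above; a minimal version matching the snippets (the values are made up) would be:

```ini
[SectionOne]
Status=Active
Age=21
```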


To get the status you just have to do:

var status = settings["SectionOne"].GetValue("Status");


There are some extension methods to help you with commonly used types like boolean and integer values. Given the same INI file, to get the age setting you call:

int? age = settings["SectionOne"].GetValueAsInt("Age");

It returns a nullable, so if your INI file does not contain the setting you can just supply a default like this:

int age = settings["SectionOne"].GetValueAsInt("Age") ?? 25;

Just give it a spin, it's small and very easy to use. The specification is documented on the wiki.

While the .NET Framework 4.5 is a highly compatible, in-place update to the Microsoft .NET Framework 4, there are some rough edges that you need to be aware of.

We needed to target .NET 4.0 for a brand new application at a client. We had everything perfectly worked out: a continuous integration build set up from day 1, and the next day a release build with an Inno Setup installer.

Later on, when our PO asked us for some screenshots, we showed him the release build on TeamCity, where he could just download the artifact and install the application.

He did what we asked him to do: he clicked next-next-finish through the installer, and the application was supposed to open for the very first time on his machine. Instead he got the following issue:

Could not load type ‘System.Runtime.CompilerServices.ExtensionAttribute’ from assembly ‘mscorlib, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089’.

At first we were a bit puzzled by the problem, but the google-reflex took over and we started our investigation, which quickly led to a very helpful answer on StackOverflow and a deep-dive blog post by Matt Wrock about IL merging.

Apparently in .NET 4.5, Microsoft decided it was time to move a couple of attributes from the System.Core assembly to mscorlib. The types still exist in System.Core, but with a TypeForwardedToAttribute pointing to their new home in mscorlib.
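Conceptually, the forwarding in the .NET 4.5 System.Core assembly looks like this (an illustrative sketch, not the actual source):

```csharp
using System.Runtime.CompilerServices;

// System.Core (4.5) no longer defines ExtensionAttribute itself;
// it forwards requests for the type to its new home in mscorlib.
[assembly: TypeForwardedTo(typeof(ExtensionAttribute))]
```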

I wanted to know how this was possible and how we could have missed it.

Looking at the CI build, a build warning indicated the issue:

warning MSB3644: The reference assemblies for framework “.NETFramework,Version=v4.0” were not found. To resolve this, install the SDK or Targeting Pack for this framework version or re-target your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.

Resolution: either install the .NET SDK (Windows 7 SDK) or make sure your build server has the reference assemblies for .NET 4, which are located in:

“\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0”.

To verify, you can quickly open the assembly with ildasm and look at its manifest:

With missing reference assemblies:

.custom instance void [mscorlib]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 )

With correct reference assemblies:

.custom instance void [System.Core]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 )