What happened to me?

Has it been almost three years since my last blog post?

Up until the very beginning of 2016 I was mostly doing .NET development, using frameworks such as Microsoft Web API and Microsoft MVC. But years before that, an itch developed.

Around 2012 my interest in dynamic languages grew and I started to experiment and play with Ruby. Looking back, I think it was mostly the fact that it felt so weird to lay a typed contract over a dynamic format like JSON. On the other hand, our codebases started to grow on the client side. We were using frameworks like jQuery and jQuery UI to make things more UX friendly. In the end it looked like we had to create a controller in JavaScript to keep it well structured.

At the beginning of 2016 I got the opportunity to become lead of a new project for a client. The choice was made to build it entirely in JavaScript, using AngularJS for the frontend and Node.js for the backend.

I never looked back. My preferred stack is now React on the frontend and Node.js on the back end, and I decided to pick up blogging again!

How to embed an image into an EPL label

Our current project has the ability to generate and combine EPL labels from different sources. One of the scenarios required embedding an image (PNG) into a generated label.

GW - Direct Graphic Write command

Description: Use this command to load binary graphic data directly into the Image Buffer memory for immediate printing. The printer does not store graphic data sent directly to the image buffer.

The graphic data is lost when the image has finished printing, power is removed or the printer is reset. Commands that size (Q and q) or clear (N and M) the image buffer will also remove graphic image data.

Syntax: GWp1,p2,p3,p4,DATA

  • p1 Horizontal start position (X) in dots.
  • p2 Vertical start position (Y) in dots.
  • p3 Width of graphic in bytes. Eight (8) dots = one (1) byte of data.
  • p4 Length of graphic in dots (or print lines)
  • DATA Raw binary data without graphic file formatting. Data must be in bytes. Multiply the width in bytes (p3) by the number of print lines (p4) for the total amount of graphic data. The printer automatically calculates the exact size of the data block based upon this formula.
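To make the p3/p4 relationship concrete: for an image 100 dots wide printed over 50 lines, p3 = ceil(100 / 8) = 13 bytes per print line, and the data block must contain exactly 13 × 50 = 650 bytes. A minimal sketch of that calculation (the dimensions are made up for illustration):

using System;

// Hypothetical image dimensions, purely for illustration.
int widthInDots = 100;
int heightInDots = 50;

int p3 = (int)Math.Ceiling(widthInDots / 8.0); // 13 bytes per print line
int totalDataBytes = p3 * heightInDots;        // 650 bytes of raw graphic data

Console.WriteLine("GW0,0," + p3 + "," + heightInDots + ", followed by " + totalDataBytes + " data bytes");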

The challenge

The challenge here was to figure out what exactly they mean by Raw binary data without graphic file formatting!

So after some googling a colleague of mine found the following working solution on CodeProject and adapted it to our needs.

using System;
using System.Drawing;
using System.IO;
using System.Text;

private static string SendImageToPrinter(int top, int left, Bitmap bitmap)
{
    using (MemoryStream ms = new MemoryStream())
    using (BinaryWriter bw = new BinaryWriter(ms, Encoding.ASCII))
    {
        //we set the p3 parameter, remember it is the width of the graphic in bytes,
        //so we divide the width of the image by 8 and round it up
        int P3 = (int)Math.Ceiling((double)bitmap.Width / 8);
        bw.Write(Encoding.ASCII.GetBytes(string.Format(
            "GW{0},{1},{2},{3},", top, left, P3, bitmap.Height)));
        //the width of the matrix is rounded up to a multiple of 8
        int canvasWidth = P3 * 8;
        //Now we convert the image into a two-dimensional binary matrix with the two loops below.
        //Within the bounds of the image we get the colour of each pixel and
        //calculate its luminance to decide on a value of 1 or 0;
        //outside the image we set the value to 1 (white).
        //Because P3 is expressed in bytes (8 bits), we gather 8 dots of this matrix,
        //convert them into a byte and write it to memory using the shift-left operator <<
        //e.g. 1 << 7 ---> 10000000
        //     1 << 6 ---> 01000000
        //     1 << 3 ---> 00001000
        for (int y = 0; y < bitmap.Height; ++y) //loop from top to bottom
        {
            for (int x = 0; x < canvasWidth; ) //from left to right
            {
                byte abyte = 0;
                for (int b = 0; b < 8; ++b, ++x) //gather 8 bits and write them as one byte
                {
                    int dot = 1; //1 for white, 0 for black
                    //pixel still within the width of the bitmap:
                    //check luminance for white or black; outside the bitmap it stays white
                    if (x < bitmap.Width)
                    {
                        Color color = bitmap.GetPixel(x, y);
                        int luminance = (int)((color.R * 0.3) + (color.G * 0.59) + (color.B * 0.11));
                        dot = luminance > 127 ? 1 : 0;
                    }
                    abyte |= (byte)(dot << (7 - b)); //shift left, then OR to pack 8 bits into a byte
                }
                bw.Write(abyte);
            }
        }
        //note: BinaryWriter.Write(string) length-prefixes the string, so a 0x01 byte precedes the newline
        bw.Write("\n");
        bw.Flush();
        //reset memory
        ms.Position = 0;
        //get encoding, I have no idea why code page 1252 works and fails for others
        return Encoding.GetEncoding(1252).GetString(ms.ToArray());
    }
}

The problem

After analyzing the performance of our code under high load, we came to the conclusion that this extraction was really slowing the entire process down: it took around 1.5 seconds on average for a 10 KB image (caused by the use of GetPixel).

What does it do?

  • Extracts raw binary data
  • Transforms the image into black and white
  • Adds a white overlay over parts that actually are out of bounds

What we needed?

  • Extract the raw binary data

We already had an image within bounds that was already black and white (greyscale), in Format1bppIndexed format.

Extract raw binary data fast (avg 84 ms on a 10 KB file)

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

private static byte[] GetRawPixelData(Image image)
{
    using (var bitmap = (Bitmap) image)
    {
        var bitmapData = bitmap.LockBits(
            new Rectangle(0, 0, bitmap.Width, bitmap.Height),
            ImageLockMode.ReadOnly,
            PixelFormat.Format1bppIndexed);
        try
        {
            var length = bitmapData.Stride * bitmapData.Height;
            byte[] bytes = new byte[length];

            // Copy bitmap to byte[]
            Marshal.Copy(bitmapData.Scan0, bytes, 0, length);

            return bytes;
        }
        finally
        {
            // Make sure we unlock no matter what
            bitmap.UnlockBits(bitmapData);
        }
    }
}
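The post stops at extracting the bytes, so here is a minimal sketch of how they could be wrapped in a GW command, assuming the label is still assembled as a Codepage 1252 string like in the original solution (BuildGwCommand is a hypothetical helper, not part of our production code):

using System;
using System.Drawing;
using System.IO;
using System.Text;

private static string BuildGwCommand(int x, int y, Bitmap bitmap)
{
    // Capture the dimensions first: GetRawPixelData above disposes the bitmap it is given.
    int width = bitmap.Width;
    int height = bitmap.Height;
    byte[] raw = GetRawPixelData(bitmap);

    // p3 is the width of the graphic in bytes. The stride of a 1bpp bitmap is padded to a
    // multiple of 4 bytes, so we copy only the first p3 bytes of every row.
    int p3 = (int)Math.Ceiling(width / 8.0);
    int stride = raw.Length / height;

    using (var ms = new MemoryStream())
    {
        byte[] header = Encoding.ASCII.GetBytes(string.Format("GW{0},{1},{2},{3},", x, y, p3, height));
        ms.Write(header, 0, header.Length);
        for (int row = 0; row < height; row++)
        {
            ms.Write(raw, row * stride, p3);
        }
        ms.WriteByte((byte)'\n');

        return Encoding.GetEncoding(1252).GetString(ms.ToArray());
    }
}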

Conversion to black and white (greyscale)

using System.Drawing;
using System.Drawing.Imaging;

public Image ConvertToGrayscale(Image image)
{
    var bitmap = new Bitmap(image.Width, image.Height);

    using (var g = Graphics.FromImage(bitmap))
    {
        //create the grayscale ColorMatrix
        var colorMatrix = new ColorMatrix(
            new[]
            {
                new[] {.3f, .3f, .3f, 0, 0},
                new[] {.59f, .59f, .59f, 0, 0},
                new[] {.11f, .11f, .11f, 0, 0},
                new float[] {0, 0, 0, 1, 0},
                new float[] {0, 0, 0, 0, 1}
            });

        //create some image attributes
        var attributes = new ImageAttributes();

        //set the color matrix attribute
        attributes.SetColorMatrix(colorMatrix);

        //draw the original image on the new image
        //using the grayscale color matrix
        g.DrawImage(image, new Rectangle(0, 0, image.Width, image.Height),
            0, 0, image.Width, image.Height, GraphicsUnit.Pixel, attributes);
    }

    //return the grayscale copy instead of discarding it
    return bitmap;
}
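With the method returning the grayscale copy, a typical call site could look like this (the variable names are hypothetical):

// Dispose the copy once you are done with it.
using (var grayscale = ConvertToGrayscale(originalImage))
{
    // further processing of the grayscale copy goes here
}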

Don't mock yourself

We use mocks, stubs or fakes when writing tests, mostly using a mocking framework of choice (Moq, RhinoMocks, NSubstitute).

The reason why we do this is to isolate our System under test (SUT) from its dependencies.

We want to avoid repeating ourselves and test the same logical code block (method, property, class, condition, …) multiple times.
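For example, isolating a service from its repository with Moq typically looks like this (the IUserRepository, User and AuthenticationService types are hypothetical, just to illustrate the pattern):

using Moq;
using NUnit.Framework;

[TestFixture]
public class AuthenticationServiceTests
{
    [Test]
    public void Unknown_user_is_not_authenticated()
    {
        // The repository dependency is replaced by a mock so only the service logic is exercised.
        var repository = new Mock<IUserRepository>();
        repository.Setup(r => r.FindByName("john")).Returns((User)null);

        var sut = new AuthenticationService(repository.Object);

        Assert.IsFalse(sut.Authenticate("john", "secret"));
    }
}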

What’s wrong with it?

In fact there is nothing wrong with it, apart from a misunderstanding of ‘System under test’.

Do you really feel that the class you are testing is a system on its own?

Unless you are developing a framework, the answer in most cases should be no.

What do we need to make mocking possible?

Most of the mocking frameworks need a way to intercept calls, either by marking the method (or property) as virtual or by introducing an interface.

As virtual method’s (or properties) tend to be a bit on the annoying side, we just introduce interfaces everywhere. Named equally to it’s implementation class, prefixed with ‘I’ as good convention abiding citizens do.

By doing this we just ignore the reason for the existence of an interface. An interface should exist if there are multiple implementations available or you want to open it up for extension!

A good practice before introducing an interface is to find a meaningful name for it without the ‘I’ prefix. If the best you can come up with is the class name itself, you might not need it.
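As an illustration (the types are hypothetical), compare an interface that merely mirrors its implementation with one that actually names a role:

// Mirrors the class name: it probably only exists to make mocking possible.
public interface ICustomerService
{
    void Register(string name);
}

public class CustomerService : ICustomerService
{
    public void Register(string name) { /* ... */ }
}

// Names a role: other implementations (SmtpNotifier, SlackNotifier, a test fake) are plausible.
public interface INotifier
{
    void Notify(string message);
}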

Does it hurt?

If abused, it can block you from making changes to the code base, which is in direct contrast with the reason you added the tests in the first place.

It can ‘glue’ your classes together in a way that refactoring parts of the system can only result in changing a lot of tests or, even worse, throwing them out of the window, even if the outcome of the SUT is the same as before.

Tests are supposed to help you, not work against you!

Ideal justifications for adapting a test?

Good

  1. You made the wrong assumption in the first place
  2. The requirement changed

Bad

  • Object composition changes
  • Refactoring

ZenIni, not just any vegetable

Sometimes we developers tend to have an itch. When I saw that there was a new kid in town (AppVeyor) that does free open source continuous integration, I just had to scratch it.

As I resent XML configuration files, I love the simplicity of YAML and the older INI file format.

But if you would like to read INI files in .NET, you have to resort to a DLL import:

using System.Runtime.InteropServices;
using System.Text;

[DllImport("KERNEL32.DLL", EntryPoint = "GetPrivateProfileStringW",
    SetLastError = true,
    CharSet = CharSet.Unicode, ExactSpelling = true,
    CallingConvention = CallingConvention.StdCall)]
private static extern int GetPrivateProfileString(
    string lpAppName,
    string lpKeyName,
    string lpDefault,
    StringBuilder lpReturnedString, //a StringBuilder (or char[]) is needed to receive the value
    int nSize,
    string lpFilename);
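Calling it then looks something like this (the file path and section/key names are made up for the example):

var buffer = new StringBuilder(255);

// Read the ConnectionString key from the [Database] section, with "" as the default value.
GetPrivateProfileString("Database", "ConnectionString", "", buffer, buffer.Capacity, @"c:\Temp\settings.ini");

string connectionString = buffer.ToString();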

So I thought of creating a simple, relaxed INI parsing library called ZenIni, which you can obtain through NuGet.

Getting started

Install-Package zenini

Provider

Everything starts with constructing the provider, which you could register as a static instance in your DI container of choice:

using System;
using Zenini;
using Zenini.Readers;

namespace ConsoleApplication5
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            var provider = new IniSettingsProvider(
                new DefaultSettingsReader(StringComparer.OrdinalIgnoreCase));
        }
    }
}

The DefaultSettingsReader allows you to specify how strings should be compared so you can choose whether or not to ignore case.

IIniSettings

The IniSettings class is the in-memory representation of your INI file, which consists of sections with or without nested key/value pairs.

IIniSettings settings = provider.FromFile(@"c:\Temp\DefaultSettings.ini");

Sections

Getting a section is as easy as

ISection firstSection = settings["SectionOne"];

Even if the original file did not contain the section, it will never return null; it will return the static instance Section.Empty. This relieves you from checking for null when you need to access a value.
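So even a lookup against a section that doesn't exist is safe to chain (assuming the value lookup itself simply returns null for a missing key):

// No null check needed: a missing section yields Section.Empty, not null.
var value = settings["NonExistingSection"].GetValue("SomeKey");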

Values

Given the following ini file

[SectionOne]
Status=Single
Name=Derek
Value=Yes
Age=30
Single=True

To get the status you just have to do:

var status = settings["SectionOne"].GetValue("Status");

Extensions

There are some extension methods to help you with commonly used types like boolean and integer values. Given the same INI file, to get the age setting you call:

int? age = settings["SectionOne"].GetValueAsInt("Age");

It returns a nullable int, so if your INI file does not contain the setting, you can just add a default like this:

int age = settings["SectionOne"].GetValueAsInt("Age") ?? 25;
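The boolean helpers work the same way; the exact method name below is my assumption, along the same lines as GetValueAsInt:

// Hypothetical boolean counterpart of GetValueAsInt, defaulting to false when the key is absent.
bool single = settings["SectionOne"].GetValueAsBoolean("Single") ?? false;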

Just give it a spin, it’s small and very easy. The specification is documented on the wiki.

Targeting .NET 4.0, what could possibly go wrong?

While the .NET Framework 4.5 is a highly compatible, in-place update to the Microsoft .NET Framework 4, there are some rough edges that you need to be aware of.

We needed to target .NET 4.0 for a brand new application at a client. We had everything perfectly worked out, continuous integration build set up from day 1. Next day, a release build with an Inno Setup installer.

Later on, when our PO asked us for some screenshots, we showed him the release build on TeamCity, where he could just download the artifact and install the application.

He did what we asked him to do: he clicked next-next-finish on the installer, and the application was supposed to open for the very first time on his machine. Instead, he got the following issue:

Could not load type ‘System.Runtime.CompilerServices.ExtensionAttribute’ from assembly ‘mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’.

At first a bit puzzled by the problem, the Google reflex took over and we started our investigation, which quickly led to a very helpful article on Stack Overflow and a deep-dive blog post by Matt Wrock about IL merging.

Apparently, in .NET 4.5 Microsoft decided it was time to move a couple of attributes from the System.Core assembly to mscorlib. The types still exist in System.Core, but with a TypeForwardedToAttribute pointing to their new home in mscorlib.
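Conceptually, the forwarding declaration in the 4.5 version of System.Core looks something like this (illustrative only, not the actual framework source):

using System.Runtime.CompilerServices;

// System.Core no longer defines the type itself; it forwards it to mscorlib.
[assembly: TypeForwardedTo(typeof(System.Runtime.CompilerServices.ExtensionAttribute))]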

I wanted to know how this was possible and how we could have missed it.

Looking at the CI build, a build warning indicated the issue:

warning MSB3644: The reference assemblies for framework “.NETFramework,Version=v4.0” were not found. To resolve this, install the SDK or Targeting Pack for this framework version or re-target your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.

Resolution: either install the .NET SDK (Windows 7 SDK) or make sure your build server has the reference assemblies for .NET 4, which are located in:

“\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.0”.

To verify, you can quickly open the assembly with ildasm and look at its manifest:

Missing reference assemblies
.custom instance void [mscorlib]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 )
Correct reference assemblies
.custom instance void [System.Core]System.Runtime.CompilerServices.ExtensionAttribute::.ctor() = ( 01 00 00 00 )

Using git as subversion client

Almost a decade ago, a company I worked for started using Subversion, which was a great step forward coming from Visual SourceSafe.

Today, the landscape is totally different, but at the enterprise level not much has changed. Some companies use Team Foundation Server, which is great if you like total integration, but its source control system is not that great. Others remained with Subversion.

They might be thinking of going to Git, but most of them are afraid of the learning curve. As with all learning processes, you just have to start somewhere. Just dive in; step by step you'll start learning Git.

Getting Started

As a fan of Chocolatey as a machine package manager, just enter at a command prompt:

cinst git

Or download the installer for Git (at the moment it's version 1.9.4).

Checking out the repository

If your Subversion repository has the default trunk, branches, tags layout:

git svn clone -s <repo_url> --prefix svn/

The -s switch is a shortcut for

git svn clone -T trunk -t tags -b branches <repo_url> --prefix svn/

This will check out the trunk of the repository by default. Now suppose your repository already had a branch called Test:

> git branch -a
* master
remotes/svn/Test
remotes/svn/trunk

Advanced: Shallow clone

The default approach above will crawl through all SVN revisions, which can be very slow. If you are in an environment with a single SVN repository containing a lot of revisions for different projects (150,000+ revisions), you could be looking at a few hours, days or even weeks. In that particular use case, follow this approach:

Get first revision of directory in gigantic repository

svn log <repo_url>

The lowest/first revision number is what you are looking for if you want the full history of the project in Git. If you just want a checkout, you can use the last revision number.

Checkout repository

git svn clone -s <repo_url> --prefix svn/ -r <first-revision>

This will initialize an empty Git repository; now get the trunk:

git svn rebase

If you look at your branches, you'll see that the Test branch is not there in this case:

> git branch -a
* master
remotes/svn/trunk

To get the remote branch to appear you need to

git svn fetch

Updating your repository

git svn rebase

It will fetch the latest changes from the trunk and try to rebase your work on top of them. If any conflicting changes were made, you'll have to resolve them.

Committing your changes

git svn dcommit

Further reading

Who is reading your tests?

While the entire “TDD is dead” revolt is taking place, let's not talk about the importance of tests, nor about the difference between integration, unit and load tests, but instead ask ourselves the following question if we do decide to add tests to our project:

Who is the audience/stakeholder of these tests?

Who could be the audience?

  • Is it the next developer trying to make sense of this unmaintainable puddle of misery?
  • Is it the analyst clicking on the build server’s tests tab?
  • Is it you, after a two year pause, implementing a new feature?
  • Is it you, now, just making sure the implementation behaves as expected?

Given Scenario

A user must be able to log in using his username/password.

This looks like something that could have been taken from an analysis document or could be the description on the post-it on your agile board. Now suppose you are assigned with this task, how would you organize your tests?

Let’s make it as ‘fugly’ as we can:

public interface IAuthenticationService
{
    bool Authenticate(string user, string password);
}

If we start with the simplest thing, we could go for a test class called AuthenticationServiceTests. But that would quickly become a big ball of mud as we start thinking about the possible tests:

  • Authenticating with a user and password returns true if user is known and password matches
  • Authenticating with an unknown user returns false
  • Authenticating with an invalid password returns false
  • Authenticating with a null or empty user throws a BusinessException
  • Authenticating with an invalid password should increment the failed logon attempts counter, to make sure we can lock out users after 3 attempts
  • Authenticating with an unconfirmed registered user throws a business exception

How about this?

[TestFixture]
public class When_user_logs_on_using_username_and_password
{
    [Test]
    public void it_returns_true()
    {
    }

    [Test]
    public void it_resets_failed_logon_attempts()
    {
    }

    [Test]
    public void it_updates_last_logon()
    {
    }
}

You could debate about not following standard .NET naming conventions, but this clearly specifies our requirement. It also involves code side effects which are necessary but may have a completely different audience (see tests 2 and 3). This could mean you may need to organize them differently:

  • it_resets_failed_logon_attempts is related to locking out users after 3 attempts
  • it_updates_last_logon could be related to tracking we need to do
  • it_returns_true seems a little bit technical; in fact we just want to express that it worked or was successful, so maybe it_indicates_success would be better.

Now what happens if we log on with an unknown user? Hmm, this seems database-oriented: if the record does not exist in the database, the user cannot log in. But in real life it could mean that the user just did not register on our site or forgot his logon credentials!

So you might go for this:

[TestFixture]
public class When_an_unknown_or_unregistered_user_logs_on
{
    [Test]
    public void it_indicates_failure()
    {
    }
}

I hope I tickled your mind a bit and that I got the message across. Expressive tests are hard work, but they can be very useful.

Just having full code coverage and tests where you need to deep-dive into the code to find out what they are doing may not be the point of your tests.

Tests are a way to express what the code is doing or should be doing ;)

Shouldly: how assertions should have been all along

A couple of years ago I published a comparison post on NUnit vs MSTest. One of the main benefits I felt NUnit had in comparison with MSTest was the richness of the assertion library it included.

Recently I stumbled upon an assertion framework called Shouldly, and I must say I am very impressed by it. If I were to ask myself again whether I would prefer NUnit over MSTest, my answer would be:

It doesn’t matter as long as we can include Shouldly as assertion library.

What I liked:

  • Very natural API, did not have to second guess the usage (think about Assert.AreGreater)
  • Due to clever inspection of the called method or property, it even changed the way I named variables in tests, because they could tell part of the story (instead of using var verify all over the place, which would have made no difference when a test failed); see the sketch after this list
  • Mature and helpful maintainer community (logged a bug, got response same day)
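A minimal sketch of what that looks like in practice (a hypothetical test; ShouldBe is the actual Shouldly API):

using NUnit.Framework;
using Shouldly;

[TestFixture]
public class ShoppingCartTests
{
    [Test]
    public void Total_includes_shipping_cost()
    {
        // Hypothetical values, purely for illustration.
        var totalIncludingShipping = 10m + 2.5m;

        // On failure Shouldly picks up the variable name, so the message reads along the lines of
        // "totalIncludingShipping should be ... but was ..." instead of "Expected: ... Actual: ...".
        totalIncludingShipping.ShouldBe(12.5m);
    }
}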

The Shouldly site contains sufficient info to get started. Give it a try; maybe you'll like it as much as I do!

Hitchhiker's guide to SQL Server

More and more people are taking the ‘There is no database’ statement to its limit, so I think it's time to share some insights into how we can let SQL Server help us when investigating performance issues.

Initial Setup

When it comes to performance on SQL Server, these are the two most important factors that have a direct impact on system performance:

  • Memory
  • IO

As we are running on a filesystem where disk fragmentation is costly and unavoidable, there are some settings to think about when creating a new database:

  • Initial Size (MB): set this to a reasonable size for both data and log. Don't start with the defaults of 3 MB data and 1 MB log, but instead with about 100 MB for data and 50 MB for the log (depending on the recovery model), as growing the files later will impact performance and has a major impact on fragmentation of your hard drive.
  • AutoGrowth: the default is to grow 1 MB at a time; set this to a percentage or to at least 1/10 of the initial size.

While running

Remember, SQL Server is your friend: including the actual execution plan while executing a query can give you hints about the underlying problem. But in general, if you don't know where to start, you could use the built-in statistics:

Top 25 Missing indexes

SELECT TOP 25
dm_mid.database_id AS DatabaseID,
dm_migs.avg_user_impact*(dm_migs.user_seeks+dm_migs.user_scans) Avg_Estimated_Impact,
dm_migs.last_user_seek AS Last_User_Seek,
OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) AS [TableName],
'CREATE INDEX [IX_' + OBJECT_NAME(dm_mid.OBJECT_ID,dm_mid.database_id) + '_'
+ REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.equality_columns,''),', ','_'),'[',''),']','') +
CASE
WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN '_'
ELSE ''
END
+ REPLACE(REPLACE(REPLACE(ISNULL(dm_mid.inequality_columns,''),', ','_'),'[',''),']','')
+ ']'
+ ' ON ' + dm_mid.statement
+ ' (' + ISNULL (dm_mid.equality_columns,'')
+ CASE WHEN dm_mid.equality_columns IS NOT NULL AND dm_mid.inequality_columns IS NOT NULL THEN ',' ELSE
'' END
+ ISNULL (dm_mid.inequality_columns, '')
+ ')'
+ ISNULL (' INCLUDE (' + dm_mid.included_columns + ')', '') AS Create_Statement
FROM sys.dm_db_missing_index_groups dm_mig
INNER JOIN sys.dm_db_missing_index_group_stats dm_migs
ON dm_migs.group_handle = dm_mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details dm_mid
ON dm_mig.index_handle = dm_mid.index_handle
WHERE dm_mid.database_ID = DB_ID()
ORDER BY Avg_Estimated_Impact DESC

This query will output the top 25 missing indexes ordered by estimated impact. Look at indexes where the average estimated impact is above 100,000 and that are frequently used (try to avoid the INCLUDE indexes at first).

If the above query does not help you in any way, it could mean that the indexes exist but have been fragmented too much.

Index Fragmentation

SELECT dbschemas.[name] as 'Schema',
dbtables.[name] as 'Table',
dbindexes.[name] as 'Index',
indexstats.avg_fragmentation_in_percent,
indexstats.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
AND indexstats.index_id = dbindexes.index_id
WHERE indexstats.database_id = DB_ID()
ORDER BY indexstats.avg_fragmentation_in_percent desc

Look for indexes that have a fragmentation level higher than 30% and a high page count (by default on SQL Server the page size is 8 KB).

Rebuild Indexes

ALTER INDEX PK_TABLE ON [dbo].[TABLE]
REBUILD WITH (FILLFACTOR = 90 , STATISTICS_NORECOMPUTE = OFF)

FILLFACTOR?

Usually, the higher the better (max is 100), but it depends on how often the table changes and what the index contains. Two examples:

  • PK on an int identity key: use fill factor 100%, as new records are always created at the end (normally index fragmentation on these should be low, unless a lot of records have been deleted)
  • PK on a GUID key: choose the fill factor depending on how often new records are added (start at 80% or 90%) and monitor page splits to fine-tune (see query below)

Monitor Page Splits

select Operation, AllocUnitName, COUNT(*) as NumberofIncidents
from ::fn_dblog(null, null)
where Operation = N'LOP_DELETE_SPLIT'
group by Operation, AllocUnitName

Further reading

Javascript by convention

Usually when doing JavaScript, you'll see a lot of script inside a page. For instance, when we add a date picker to a text input, you could add a script block to the page and do the necessary initialization there:

<h1>Modify your settings</h1>

<form>
    <div class="control-group">
        <label class="control-label">Birthdate</label>
        <div class="controls">
            <div class="input-append">
                <input type="text" name="birthDate" id="birthDate" />
                <span class="add-on">
                    <i class="icon-calendar"></i>
                </span>
            </div>
        </div>
    </div>
</form>

<script type="text/javascript">
    $(document).ready(function() {
        $("#birthDate").datepicker({
            dateFormat: "dd-mm-yy"
        });
    });
</script>

Now what if we needed to do this more than just once? Let's change our HTML to a more unobtrusive approach:

<input type="text" name="birthDate" id="birthDate" data-role="datepicker" />

Now just add a new JavaScript file to your website and include it in your HTML.

(function($) {
    $(document).ready(function() {
        //datepicker convention
        $(':text[data-role="datepicker"]').datepicker({
            dateFormat: "dd-mm-yy"
        });
    });
})(jQuery);

In this file we make sure we alias ‘$’ to jQuery, as another plugin we use could have aliased it differently.

If you are worried about performance of the selector used here, you can always use a class in your convention.