Embedding Leadtools License Files

I have my colleague Jamie to thank for this particular blog. I have recently been migrating a legacy project written against Leadtools v14.5 to v19 of the Medical Imaging Suite, and during a conversation Jamie showed me a nice way of embedding the Leadtools licence and developer key within a .NET application. I’m sure it’s not going to bring about world peace or halt global warming, but it might save you a headache that no CT scanner is needed to diagnose. In all seriousness, it’s just a nice, tidy way of handling the licensing. So without further ado:

1. Right-click on your project and select ‘Properties’

2. Select the ‘Resources’ tab. You may need to create a resources file if you don’t already have one, using the very self-explanatory link on the properties tab.

3. Open the drop-down toolbar menu and select ‘Files’

4. Add the two files in question, the Developer Key (DEV_KEY) and the Licence Key (LIC_KEY), using the ‘Add Resource’ toolbar item


5. OK. We’re now ready to add some code. I created a small function to license the Leadtools libraries ready for use.


The following lines of code are the ones we are really interested in:

 byte[] licenseBuffer = Resources.LIC_KEY;
 string developerkey = Encoding.Default.GetString(Resources.DEV_KEY);
 RasterSupport.SetLicense(licenseBuffer, developerkey);

The first line reads the licence key into a byte array, the second reads the developer key into a string, and the third sets the licence using these two values. Nice and simple. The rest of my function deals with knowing whether it has been called before, as this code may be referenced in many different places and we only need to set the licence once.
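Since the full function isn’t shown, here is a sketch of that "only once" guard. The class and method names (RunOnce, Invoke) are my own invention, and Interlocked.CompareExchange is just one thread-safe way to achieve it:

```csharp
using System;
using System.Threading;

public static class RunOnce
{
    private static int _done; // 0 = not yet run, 1 = has run

    // Invokes the supplied action at most once per process, however
    // many callers race to get here first. Returns true for the caller
    // whose invocation actually ran the action.
    public static bool Invoke(Action action)
    {
        // Atomically flip the flag; only the first caller sees 0.
        if (Interlocked.CompareExchange(ref _done, 1, 0) == 0)
        {
            action();
            return true;
        }
        return false;
    }
}
```

The licensing call then becomes `RunOnce.Invoke(() => RasterSupport.SetLicense(licenseBuffer, developerkey));`, and every call site can use it without worrying about who got there first.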

As an aside, if you are wondering about the verbosity of my variable and method naming, it’s because I am giving this ‘No Comments’ thing a go. After many years I am starting to come round to the idea that commenting code may actually be a BAD thing. For a fuller description follow this link, which is sadly not the article that originally prompted me to try it out. The main crux of my argument is that code is code and is maintained; the intent rarely changes, and when it does the programmer changes it to the best of their ability at the time. Comments, however, are written once, mostly poorly, and are NEVER maintained, so as the program lives, breathes and evolves the comments stay as they were when written; a testament to good intentions, a museum piece bearing no resemblance to reality.

So… we’ll see how it works out, I feel a blog coming on!


Faster Image Processing (aka Lock your Bits).

Whilst working on a small new project to write a console application that examines images for certain kinds of colour data, I started, as I always do, by looking at how best to achieve this in a performant manner for the client. There is nothing worse than an application that appears to hang whilst an invisible piece of processing occurs. True, in this instance it was a console app that would be automagically scheduled, so this was not such a design issue, but I still think it’s good practice. The main remit of the application was to iterate over every single pixel within an image file and perform various calculations against the colour data obtained. The calculations were fairly simple and thus set in stone, so no real performance gains could be made from a refactor; however, this business of iterating over every pixel…

What of course springs to mind in the first instance is the easiest and quickest option, demonstrated by the following code:

    for (int y = 0; y < img.Height; y++) {
      for (int x = 0; x < img.Width; x++) {
        Color clr2 = img.GetPixel(x, y);
        /* Compare ARGB values; Color's == operator also compares the
           known-colour/name state, so a colour returned by GetPixel
           never matches a named colour such as Color.Blue directly. */
        if (clr2.ToArgb() == Color.Blue.ToArgb()) {
          Console.WriteLine("The colour was blue!");
        }
      }
    }

Job done, send it to the client… But is that really the best way? If the client is processing huge numbers of very large files, then very quickly your seemingly amazing turnaround of a solution starts to look ill thought out. A little more digging and I came across the LockBits/UnlockBits methods of the Bitmap object, which I must admit I had not heard of until this point. Please see this excellent blog if you need to know the precise mechanics of how this works:

http://www.bobpowell.net/lockingbits.htm
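The core of what that article explains is the addressing arithmetic: once the bitmap is locked, pixel (x, y) lives at byte offset y * stride + x * pixelSize from the start of the data, with the channels stored in B, G, R order. As a sketch (the helper name is my own):

```csharp
using System;

public static class PixelMath
{
    // Byte offset of the first channel (blue, in GDI+'s BGR layout)
    // of pixel (x, y), given the row stride and the bytes-per-pixel.
    public static int Offset(int x, int y, int stride, int pixelSize)
    {
        if (x < 0 || y < 0 || stride <= 0 || pixelSize <= 0)
            throw new ArgumentOutOfRangeException();
        return (y * stride) + (x * pixelSize);
    }
}
```

Note that the stride is not simply width times bytes-per-pixel: GDI+ pads each row to a multiple of four bytes, which is why the code reads the stride from BitmapData rather than computing it by hand.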

Effectively, LockBits gives you direct access in memory to the underlying data that the image is composed of. Access to the data is not via managed code but via pointers, so the relevant block needs to be marked as unsafe; in addition, the application itself needs to allow unsafe code blocks to run, which can be done via the Build tab within the project properties. After a little refactoring we are presented with the following revisited code, which performs the same job as the earlier snippet:

      /* Channel offsets within a pixel; GDI+ stores the data in BGR(A) order. */
      const int BLUE = 0, GREEN = 1, RED = 2;

      BitmapData picData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height), ImageLockMode.ReadOnly, img.PixelFormat);

      try {

        /* Now ascertain the pixel size. (8bpp indexed images store palette
           indices rather than channel data and would need a palette lookup,
           so they are not handled here.) */
        int pixelSize = -1;
        switch (picData.PixelFormat) {
          case PixelFormat.Format32bppArgb: { pixelSize = 4; break; }
          case PixelFormat.Format24bppRgb: { pixelSize = 3; break; }
        }

        if (pixelSize <= 0) {
          throw new FormatException("Pixel format is unsupported or could not be ascertained");
        }

        /* OK. Iterate over the pixels. */
        for (int y = 0; y < picData.Height; y++) {

          /* As we are dipping into unmanaged memory we need to mark this code segment as unsafe. */
          unsafe {

            /* Obtain a pointer to the current row of data for the image we are
               processing. The source image is locked read-only since we never
               write to it. */
            byte* row = (byte*)picData.Scan0 + (y * picData.Stride);

            /* Iterate over the width of the image. */
            for (int x = 0; x < picData.Width; x++) {
              int[] array = { row[(x * pixelSize) + BLUE], row[(x * pixelSize) + GREEN], row[(x * pixelSize) + RED] };
              Color clr2 = Color.FromArgb(array[RED], array[GREEN], array[BLUE]);
              /* Compare ARGB values; == against a named colour would never match. */
              if (clr2.ToArgb() == Color.Blue.ToArgb()) {
                Console.WriteLine("The colour was blue!");
              }
            }
          }
        }
      }
      /* Whatever happens, ensure that we unlock the image once processing has completed. */
      finally {
        try {
          img.UnlockBits(picData);
        } catch { }
      }
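As mentioned above, none of this will compile unless the project allows unsafe code. The checkbox on the Build tab simply sets a compiler flag; in the project file it appears as something like the following (the property name is standard MSBuild, the surrounding layout is illustrative):

```xml
<!-- Illustrative .csproj fragment: the Build tab's
     "Allow unsafe code" checkbox toggles this property. -->
<PropertyGroup>
  <AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>
```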


The proof of the pudding is of course in the eating, and this one tastes good. Against my 854 x 640 image the original method comes in at 1.35 seconds per image; not exactly racing red, especially when compared to the LockBits version which, even with all that extra set-up code, came in at under 0.08 seconds, in the region of 16 times faster.
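For anyone wanting to reproduce the comparison, the timings were taken with nothing more exotic than System.Diagnostics.Stopwatch; a minimal timing helper (the class and method names are my own invention) looks like this:

```csharp
using System;
using System.Diagnostics;

public static class Timing
{
    // Runs the supplied work and returns how long it took.
    public static TimeSpan Time(Action work)
    {
        var sw = Stopwatch.StartNew();
        work();
        sw.Stop();
        return sw.Elapsed;
    }
}
```

Usage is simply `var elapsed = Timing.Time(() => ProcessImage(img));`, where ProcessImage stands in for whichever of the two approaches is under test; average over a batch of images rather than trusting a single run.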