Showing the branch name (using colors) on Windows Subsystem for Linux (WSL)


Hello,

The problem:

The default WSL setup uses Bash, and even though it supports coloring and customization, the default prompt is very simple. Since I work with Git and I'd like to know which branch I am working on, let's see how to show it in the Bash prompt.

The solution (please note that I am assuming your installation uses Bash as its shell; other shells may be supported, but they require some variations in these instructions).

1- As obvious as it sounds, first install Windows Subsystem for Linux (WSL).
2- Install Git in the WSL; normally this is just a matter of executing the following command:
sudo apt-get install git

3- Open a terminal window and create a file in your home directory. It can have any name; we will use .git-prompt.sh. The content of this file can be downloaded from https://github.com/git/git/blob/master/contrib/completion/git-prompt.sh I tried adding the file content here, but it does not render well 😉

4- Update your ~/.bashrc so that it sources the script created in the previous step (see the sketch right after this list). The if statement is optional; it simply guards against the file not being present.
5- Now, if you take a look at the file, the header explains how to update the prompt for bash or zsh. We will be using bash, since we also want to add some coloring.
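A minimal sketch of that ~/.bashrc addition (assuming the file from step 3 was saved as ~/.git-prompt.sh):

# Load the git prompt helper only if the file is actually there
if [ -f ~/.git-prompt.sh ]; then
    source ~/.git-prompt.sh
fi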

The original script suggests:
PS1='[\u@\h \W$(__git_ps1 " (%s)")]\$ '

However, it's possible in Bash to add support for colors. If you want to better understand Bash commands and features, type the following command in your Bash terminal:
man bash

Here is a sample of the updated PS1
PS1='\[\033]0;$TITLEPREFIX:$PWD\007\]\n\[\033[32m\]\u@\h \[\033[35m\]$MSYSTEM \[\033[33m\]\w\[\033[36m\]$(__git_ps1 " (%s)")\[\033[0m\]\n$'
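For reference, the pieces of that prompt break down roughly as follows (the color codes are standard ANSI escape sequences; $MSYSTEM is only set by Git for Windows / MSYS shells, so it may render as empty on WSL):

\[\033]0;$TITLEPREFIX:$PWD\007\]   # sets the terminal window title to $TITLEPREFIX:$PWD
\[\033[32m\]\u@\h                  # username@hostname in green
\[\033[35m\]$MSYSTEM               # MSYSTEM (if any) in magenta
\[\033[33m\]\w                     # current working directory in yellow
\[\033[36m\]$(__git_ps1 " (%s)")   # current git branch, in parentheses, in cyan
\[\033[0m\]                        # resets the color back to the default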

The final result will show

username@hostname directory (branch)

in the Bash prompt.

 

MSBuild Zip File, SVN Revision after Web Publish


As part of automating the deployment of an application, I've spent some time creating MSBuild scripts for simple tasks. The following are the tasks I'll be providing:

  • Create a Zip file from the output
  • Read the SVN Revision from the repository

The code will be using inline tasks, since they can be created almost anywhere. Inline tasks let the user create tasks using C# (compiled against .NET 4.0, at least at the time I am writing this blog post).

Reading the Revision Number using MSBuild

For this particular MSBuild task, I am depending on a third-party assembly, SharpSvn. It is an SVN client implemented in C#. The tricky part in this script is loading the external assembly and calling its functions; all of that work is done through reflection.

<UsingTask TaskName="SVNRevisionReader"               TaskFactory="CodeTaskFactory"               AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll" >
    <ParameterGroup>
      <SvnPath ParameterType="System.String" Required="true" />
	  <SharpSvnUIDllFilePath ParameterType="System.String" Required="false" />
	  <SvnOutputDirectory ParameterType="System.String" Required="false" />
	  <LastChangeRevision ParameterType="System.Int32" Output="true" />
      <Revision ParameterType="System.Int32" Output="true" />
    </ParameterGroup>
    <Task>
		<Reference Include="Microsoft.CSharp" />
		<Using Namespace="System"/>
		<Using Namespace="System.Diagnostics"/>
		<Using Namespace="System.IO"/>
		<Using Namespace="System.Reflection"/>
		<Code Type="Fragment" Language="cs">
<![CDATA[
            //========================= BEGIN: In-line Task
            Func<string, Assembly> loadAssembly = (string assemblyFilePath) =>
            {
                Assembly assembly = null;
				Log.LogMessage("Loading assembly from :" + assemblyFilePath, MessageImportance.High);
                if (!string.IsNullOrEmpty(assemblyFilePath) && System.IO.File.Exists(assemblyFilePath))
                {
                    assembly = Assembly.UnsafeLoadFrom(assemblyFilePath);
                }
                else
                {
                    Log.LogError("Assembly file path:" + assemblyFilePath + " does not exist.", MessageImportance.High);
                }

                return assembly;
            };

            var sharpSvnAssembly = loadAssembly(@"..\MSBuild\SharpSvn.1.9-x86.1.9004.3913.141\lib\net40\SharpSvn.dll");
            //var sharpSvnUIAssembly = loadAssembly(@"..\MSBuild\SharpSvn.1.9-x86.1.9004.3913.141\lib\net40\SharpSvn.UI.dll");
            var typeOfSharpSvnSvnClient = sharpSvnAssembly.GetType("SharpSvn.SvnClient");
            var typeOfSharpSvnSvnTarget = sharpSvnAssembly.GetType("SharpSvn.SvnTarget");
            var typeOfSharpSvnSvnInfoEventArgs = sharpSvnAssembly.GetType("SharpSvn.SvnInfoEventArgs");

            LastChangeRevision = 0;
            Revision = 0;
            //using (var client = new SharpSvn.SvnClient())
            using (dynamic client = Activator.CreateInstance(typeOfSharpSvnSvnClient))
            {
                //SharpSvn.SvnInfoEventArgs info;
                try
                {
                    //client.GetInfo(SharpSvn.SvnTarget.FromString(SvnPath), out info);
                    dynamic svnTarget = sharpSvnAssembly
                            .GetType(@"SharpSvn.SvnTarget")
                            .GetMethod(@"FromString", BindingFlags.Public | BindingFlags.Static, Type.DefaultBinder, new[] { typeof(string) }, null)
                            .Invoke(null, new object[] { SvnPath });
                    var getInfoArgs = new object[] { svnTarget, null };
                    typeOfSharpSvnSvnClient
                        .GetMethod(@"GetInfo", new[] { typeOfSharpSvnSvnTarget, typeOfSharpSvnSvnInfoEventArgs.MakeByRefType() })
                        .Invoke(client, getInfoArgs);
                    dynamic info = getInfoArgs[1];

                    LastChangeRevision = Convert.ToInt32(info.LastChangeRevision.ToString());
                    Revision = Convert.ToInt32(info.Revision.ToString());
                }
                //catch (SharpSvn.SvnInvalidNodeKindException svnInvalidNodeKindException)
                catch (System.Exception exception)
                {
                    Log.LogError(exception.Message);
                }
            }

			if (!string.IsNullOrEmpty(SvnOutputDirectory) && System.IO.Directory.Exists(SvnOutputDirectory))
			{
				System.IO.File.WriteAllLines(System.IO.Path.Combine(SvnOutputDirectory, "LastChangeRevision.svn"), new[] { LastChangeRevision.ToString() });
				System.IO.File.WriteAllLines(System.IO.Path.Combine(SvnOutputDirectory, "Revision.svn"), new[] { Revision.ToString() });
			}

            Log.LogMessage(string.Concat("LastChangeRevision: ", LastChangeRevision, " Revision: ", Revision), MessageImportance.High);
            //========================= END: In-line Task
]]>
      </Code>
    </Task>
  </UsingTask>

Compressing a directory into a Zip file

The next MSBuild script compresses a directory (normally the output directory, after the app has been built).

<UsingTask TaskName="ZipDirectory" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
  <ParameterGroup>
	<InputDirectory ParameterType="System.String" Required="true" />
    <OutputFilename ParameterType="System.String" Required="true" />
  </ParameterGroup>
  <Task>
    <Reference Include="System.IO.Compression" />
    <Using Namespace="System.IO.Compression" />
    <Code Type="Fragment" Language="cs">
    <![CDATA[
      try
      {
        Log.LogMessage(MessageImportance.High, string.Concat("Compressing: ", InputDirectory, " To File: ", OutputFilename));

        using (Stream zipStream = new FileStream(Path.GetFullPath(OutputFilename), FileMode.Create, FileAccess.Write))
        using (ZipArchive archive = new ZipArchive(zipStream, ZipArchiveMode.Create))
        {
            foreach (var filePath in System.IO.Directory.GetFiles(InputDirectory, "*.*", System.IO.SearchOption.AllDirectories))
            {
                var relativePath = filePath.Replace(InputDirectory, string.Empty);
                using (Stream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read))
                using (Stream fileStreamInZip = archive.CreateEntry(relativePath).Open())
                    fileStream.CopyTo(fileStreamInZip);
            }
        }

        //System.IO.Compression.ZipFile.CreateFromDirectory(InputDirectory, OutputFilename, System.IO.Compression.CompressionLevel.Fastest, true);
        return true;
      }
      catch (Exception ex)
      {
        Log.LogErrorFromException(ex);
        return false;
      }
    ]]>
    </Code>
  </Task>
</UsingTask>

The previous code snippets are enough to get the tasks ready to be used. Depending on the type of project and its needs, the MSBuild tasks can be invoked at different times.
 

Calling the SVN Reader before the build process starts

To call your task before the build process starts, specify the BeforeTargets attribute and set it to PrepareForBuild:

<Target Name="EnsureSVNRevision" BeforeTargets="PrepareForBuild">
    <SVNRevisionReader SvnPath="$(SolutionDir)" SvnOutputDirectory="$(MSBuildProjectDirectory)">
      <Output PropertyName="SvnLastChangeRevision" TaskParameter="LastChangeRevision" />
      <Output PropertyName="SvnRevision" TaskParameter="Revision" />
    </SVNRevisionReader>
  </Target>
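Once that target has run, the two output properties are available to any later target. For instance, a hypothetical target (not part of the original script) that simply echoes them during the build could look like this:

  <Target Name="PrintSvnRevision" AfterTargets="EnsureSVNRevision">
    <Message Text="Building from SVN revision $(SvnRevision) (last change: $(SvnLastChangeRevision))" Importance="high" />
  </Target>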

 

Calling the Directory Zip Compressor after the Build is over

If you want to execute the zip compression after building, it can normally be called in the AfterBuild target, e.g.:

  <Target Name="AfterBuild"
		  Condition="'$(Configuration)' == 'Development' Or '$(Configuration)' == 'Stage'"
	>
    <MakeDir Directories="$(SolutionDir).Deployments/$(Configuration)/" Condition="!Exists('$(SolutionDir).Deployments/$(Configuration)/')" />
    <Delete Files="$(SolutionDir).Deployments/$(Configuration)/$(ProjectName).zip" Condition="Exists('$(SolutionDir).Deployments/$(Configuration)/$(ProjectName).zip')" />
    <ZipDirectory OutputFilename="$(SolutionDir).Deployments/$(Configuration)/$(ProjectName).zip" InputDirectory="$(TargetDir)\" />
  </Target>

 

However, if you are creating a Web Project, normally you don't take the output directory, because it is slightly different from the published files (which go through some extra processing). Web Projects are published; in this case it is assumed that they are published to the local file system (where MSBuild has access to the files). Thus, the AfterTargets attribute can be used to detect when the published files are ready:

<Target Name="AfterWebPublish" AfterTargets="WebPublish"
          Condition="'$(Configuration)' == 'Development' Or '$(Configuration)' == 'Stage'"
  >
    <MakeDir Directories="$(SolutionDir).Deployments/$(Configuration)/" Condition="!Exists('$(SolutionDir).Deployments/$(Configuration)/')" />
    <Delete Files="$(SolutionDir).Deployments/$(Configuration)/$(ProjectName).zip" Condition="Exists('$(SolutionDir).Deployments/$(Configuration)/$(ProjectName).zip')" />
    <ZipDirectory OutputFilename="$(SolutionDir).Deployments/$(Configuration)/$(ProjectName).zip" InputDirectory="$(MSBuildProjectDirectory)\$(publishUrl)\" />
    <RemoveDir Directories="$(SolutionDir).Deployments/$(Configuration)/$(ProjectName)/" Condition="Exists('$(SolutionDir).Deployments/$(Configuration)/$(ProjectName)/')" />
  </Target>

With the previous snippets it is possible to customize in more detail when the tasks should be executed to get the work done.

Cheers,
Herb

Creating PDF Reports from HTML using DotLiquid (Markup) for templates and WkHtmlToXSharp for printing PDFs


The problem

Our application needs to create PDF reports. The solution tries to accomplish the following goals:
– No cost (a free solution)
– Easy to modify styling (e.g., depending on the change it may not be necessary to redeploy the binary, and since it is HTML most people know the basics of it).

REMARKS

– You must review the limitations of the WkHtmlToXSharp wrapper at https://github.com/pruiz/WkHtmlToXSharp .
– The wrapper WkHtmlToXSharp does not expose all the native functionality. If you need native functionality, you are likely to have two options: 1) do it with another library, or 2) fork the code on GitHub, expose what you need, and update your copy of WkHtmlToXSharp with your modifications.
– DotLiquid Markup takes a closed approach in regards to security, thus you have to indicate which items are accessible to its templating system; if security is not a major concern, Razor Engine will work equally well or better.

The Solution

The solution follows this high-level workflow:

– A data source will provide an object which will be the only data source for our report. In this step we use the dynamic feature of C# and System.Dynamic.ExpandoObject, because they can produce objects with properties defined on the fly.
– The templating system will use the data source and the template file(s) to produce HTML.
– The generated HTML will be provided to the PDF printer in order to create a neat PDF document.

If you want to see the code without much explanation, then see it at GitHub in https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.PdfGeneration

Dot Liquid Markup (Templating System)

It is worth mentioning that the templating system is DotLiquid Markup. There are other templating systems; a really good one is Razor Engine, which is based on the Razor syntax used when creating web applications in .NET. Why did I select DotLiquid over Razor Engine? It was a matter of security. Razor Engine will let you access most of the .NET Framework, which I think is very convenient but less secure, since file handling, DB access and other things can be done with the .NET Framework. On the other hand, DotLiquid uses its own custom syntax to specify the template, and it is also closed by default: to access specific features, those items (such as classes or types) must be explicitly added, although most common types and methods are accessible in DotLiquid by default.
NOTE: Razor Engine does have support for isolating execution by creating an AppDomain, but I simply didn't want to take that path.

Razor Engine as well as DotLiquid Markup are well documented, although for the latter you will find more documentation for the Ruby version than for the C# port.

Most of the Ruby documentation for Liquid Template is applicable to DotLiquid (obviously using C# instead of Ruby).
http://www.rubydoc.info/gems/liquid/
http://liquidmarkup.org/
https://docs.shopify.com/themes/liquid-documentation/basics

Lastly, DotLiquid Markup is extensible; in our example we use some extensions (aka filters, since extensions can be of different types) for Base64 encoding, HTML encoding and converting to local URIs.

DotLiquid Markup is available through NuGet at https://www.nuget.org/packages/DotLiquid/
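To get a feel for the library before looking at the project code, a minimal render with DotLiquid (just a sketch of the Template.Parse / Render API that is used later) looks like this:

using DotLiquid;

class LiquidSample
{
    static void Main()
    {
        // Parse a Liquid template and render it against a simple anonymous object.
        var template = Template.Parse("Hello, {{ name }}!");
        var output = template.Render(Hash.FromAnonymousObject(new { name = "world" }));
        System.Console.WriteLine(output); // prints: Hello, world!
    }
}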

WkHtmlToXSharp ( wkhtmltopdf ) (HTML To PDF Printer)

WkHtmlToXSharp is a wrapper for the native open-source library wkhtmltopdf. The native library is quite flexible and robust, and it is available on Windows and Unix-like systems. The wrapper WkHtmlToXSharp does not expose all the functionality of the native library, thus if you need some functionality you are likely to have two options:
1- Fork the project on GitHub, add your customization, and use your own library. (You can create a pull request if you want to share the changes.)
2- Once the PDF is generated, use another third-party library to perform the modifications.

To see the wrapper limitations and capabilities go to
https://github.com/pruiz/WkHtmlToXSharp
To see native library capabilities go to
http://wkhtmltopdf.org/
http://wkhtmltopdf.org/libwkhtmltox/pagesettings.html
https://madalgo.au.dk/~jakobt/wkhtmltoxdoc/wkhtmltopdf-0.9.9-doc.html

The library is accessible thru nuget
https://www.nuget.org/packages/WkHtmlToXSharp/ (Main)
https://www.nuget.org/packages/WkHtmlToXSharp.Win32/ (Win32)
https://www.nuget.org/packages/WkHtmlToXSharp.Win64/ (Win64)

Because a native library is being used, there are platform-specific wrappers. The wrapper already includes the native library, and it decompresses it when the application starts up. This increases the size of the app in memory (and on disk), but simplifies deployment and distribution (a fair trade-off).

Dynamic C#

I hope that by using dynamic the template engine will have enough independence (and simplicity) from the data source. Normally data sources for reports are unknown structures: you only define the structure when somebody asks you to create the report. In scenarios like the one just described, dynamic fits well, since we can populate our data objects by specifying properties "on the fly" without much code or complexity.
See Dynamic in C# 4 for more details. For a more advanced use of dynamic see http://blogs.msdn.com/b/csharpfaq/archive/2009/10/19/dynamic-in-c-4-0-creating-wrappers-with-dynamicobject.aspx
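As a quick illustration of that idea (a standalone sketch, not part of the sample project), properties can be attached to an ExpandoObject on the fly and read back either dynamically or as a dictionary:

using System;
using System.Collections.Generic;
using System.Dynamic;

class DynamicSketch
{
    static void Main()
    {
        dynamic row = new ExpandoObject();
        row.FirstName = "Ada";                    // properties are created on assignment
        row.HireDate = new DateTime(2015, 3, 1);

        // ExpandoObject also implements IDictionary<string, object>, which is handy
        // when a consumer (like a template engine) expects a dictionary.
        var asDictionary = (IDictionary<string, object>)row;
        Console.WriteLine("{0} hired on {1:d}", asDictionary["FirstName"], row.HireDate);
    }
}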

Show me the code

Rather than copying and pasting all of the project's code, I'll share what I consider the most relevant classes and explain their goals.
The first snippet is just a fake data layer, which provides information for the report to consume. The code is in the project MsDotNet.PdfGeneration.Data.

using FakeData;
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;

namespace PdfGeneration.Data
{
    public class DataProvider
    {
        public dynamic GetReportData()
        {
            // Generating employee list
            var employees = Enumerable
                .Range(0, NumberData.GetNumber(10, 50))
                .Select(i =>
                   {
                       dynamic newEmployee = new ExpandoObject();
                       newEmployee.BirthDate = DateTimeData.GetDatetime(new DateTime(1973, 1, 1), new DateTime(1997, 12, 1));
                       newEmployee.FirstName = NameData.GetFirstName();
                       newEmployee.LastName = NameData.GetSurname();
                       newEmployee.Company = NameData.GetCompanyName();
                       newEmployee.Email = NetworkData.GetEmail();
                       newEmployee.PhoneNumber = PhoneNumberData.GetInternationalPhoneNumber();
                       newEmployee.Address = string.Format("{0} {1} {2}\n{3},{4} {5}", PlaceData.GetStreetName(), PlaceData.GetStreetNumber(), PlaceData.GetAddress(), PlaceData.GetCity(), PlaceData.GetState(), PlaceData.GetZipCode());
                       newEmployee.PersonalQuote = TextData.GetSentences(5);
                       // NOTE: Even though ExpandoObject is compatible with IDictionary<string,object>,
                       //       The template engine only accepts Dictionary<string,object>
                       return new Dictionary<string, object>(newEmployee);
                   })
                .ToList();

            dynamic reportData = new ExpandoObject();
            reportData.Employees = employees;
            return reportData;
        }
    }
}

Once we have the data, we should pass it to our template system (DotLiquid). The template system has three components:
1. The template file(s) (asset files and DotLiquid files). These are in the project MsDotNet.PdfGeneration.Templating.
2. The data. It's generated by the project MsDotNet.PdfGeneration.Data.
3. The code which joins the data and the template files to produce the output. This code is also in MsDotNet.PdfGeneration.Templating.
First, let's take a look at each of them:

{% assign dateFormat = 'MM/dd/yyyy' %}
<!DOCTYPE html>

<html lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
    <meta charset="utf-8" />
    <title></title>
    <link href="{{ "Assets/bootstrap.css" | ToLocalUri }}" rel="stylesheet" type="text/css" />
    <style type="text/css">

        @font-face {
            font-family: 'FontAwesome';
            src: local('FontAwesome'), url(data:font/woff;base64,{{ "Assets/fontawesome-webfont.woff" | ToBase64 }}) format('woff');
        }

        @media print {

            .page-break-after {
                page-break-after: always;
            }

            .page-break-before {
                page-break-before: always;
            }
        }

        body {
            height: 297mm;
            width: 210mm;
            margin-left: auto;
            margin-right: auto;
        }
        
    </style>
    <link href="{{ "Assets/fontawesome.css" | ToLocalUri }}" rel="stylesheet" type="text/css" />
</head>
<body>
    <h1>Report Sample</h1>
    <table class="table table-bordered">
    {% for employee in Employees %}
        <tr>
            <td>{{ employee.FirstName | HtmlEncode }} {{ employee.LastName | HtmlEncode }}</td>
            <td>{{ employee.Email | HtmlEncode }}</td>
            <td>{{ employee.PhoneNumber | HtmlEncode }}</td>
            <td>{{ employee.Address | HtmlEncode }}</td>
        </tr>
    {% endfor %}
    </table>
</body>
</html>

Other than the template file, it uses a CSS file for styling and a woff file to provide a custom font. Note that the HTML syntax is standard, and we can add DotLiquid syntax into the HTML file to support dynamic content. Please refer to the Liquid Markup syntax documentation to understand it.
In the template we are using custom filters:
HtmlEncode: Makes sure that the output is encoded to be displayed in HTML.
ToLocalUri: Converts a given relative path to an absolute path.
ToBase64: Encodes the input (a file or text) to Base64. This is useful when using Data URIs ( https://en.wikipedia.org/wiki/Data_URI_scheme ). Data URIs are used to embed resources into the HTML, for example images, fonts, etc.

The code which handles the DotLiquidMarkup is at MsDotNet.PdfGeneration.Templating

using System;
using System.Linq;

namespace PdfGeneration.Templating.Extensibility
{
    public static class CustomFilters
    {
        public static string HtmlEncode(object input)
        {
            var htmlInput = input == null ? null : System.Net.WebUtility.HtmlEncode(input.ToString());
            return htmlInput;
        }

        public static string ToBase64(object input, string directory = null)
        {
            directory = directory ?? AppDomain.CurrentDomain.BaseDirectory;
            byte[] buffer = null;
            var inputAsFilePath = (input as string) ?? string.Empty;
            inputAsFilePath = System.IO.Path.Combine(directory, inputAsFilePath);
            if (!string.IsNullOrEmpty(inputAsFilePath) && System.IO.File.Exists(inputAsFilePath))
            {
                buffer = System.IO.File.ReadAllBytes(inputAsFilePath);
            }
            else if (input is System.Collections.Generic.IEnumerable<byte>)
            {
                var inputAsBytes = input as System.Collections.Generic.IEnumerable<byte>;
                buffer = inputAsBytes.ToArray();
            }
            else
            {
                buffer = System.Text.Encoding.Default.GetBytes(input.ToString());
            }

            if (buffer == null)
                return string.Empty;

            var base64String = Convert.ToBase64String(buffer);
            return base64String;
        }

        public static string ToLocalUri(object input, string directory = null)
        {
            directory = directory ?? AppDomain.CurrentDomain.BaseDirectory;
            var inputAsFilePath = (input as string) ?? string.Empty;
            inputAsFilePath = System.IO.Path.Combine(directory, inputAsFilePath);
            var filePathUri = new Uri(inputAsFilePath);
            return filePathUri.ToString();
        }
    }
}
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;

namespace PdfGeneration.Templating
{
    public class TemplateRender
    {
        public TemplateRender()
        {
            Initialize();
        }

        private void Initialize()
        {
            DotLiquid.Template.RegisterFilter(typeof(Extensibility.CustomFilters));
            DotLiquid.Liquid.UseRubyDateFormat = false;
            DotLiquid.Template.NamingConvention = new DotLiquid.NamingConventions.CSharpNamingConvention();
        }

        public void AddKnownType(params Type[] visibleTypes)
        {
            visibleTypes = visibleTypes ?? Enumerable.Empty<Type>().ToArray();
            foreach (var type in visibleTypes)
            {
                var typeProperties = type.GetProperties();
                DotLiquid.Template.RegisterSafeType(type, typeProperties.Select(property => property.Name).ToArray());
            }
        }

        public void RenderTemplate(string templateFilePath, string htmlFilePath, dynamic model)
        {
            using (Stream htmlStream = new FileStream(htmlFilePath, FileMode.OpenOrCreate))
                RenderTemplate(templateFilePath, htmlStream, model);
        }

        public void RenderTemplate(string templateFilePath, Stream htmlStream, dynamic model, bool hasToLeaveStreamOpen = false)
        {
            using (TextWriter htmlTextWriter = new StreamWriter(htmlStream, Encoding.Default, 4096, hasToLeaveStreamOpen))
            {
                RenderTemplate(templateFilePath, htmlTextWriter, model);
            }
        }

        public void RenderTemplate(string templateFilePath, TextWriter htmlTextWriter, dynamic model)
        {
            var template = DotLiquid.Template.Parse(File.ReadAllText(templateFilePath));
            var templateRenderParameters = new DotLiquid.RenderParameters();
            var directorySeparator = Path.DirectorySeparatorChar.ToString();
            var templateDirectory = Path.GetFullPath(
                (templateFilePath.StartsWith(directorySeparator, StringComparison.InvariantCultureIgnoreCase) || templateFilePath.StartsWith("." + directorySeparator, StringComparison.InvariantCultureIgnoreCase))
                ? Path.GetDirectoryName(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, templateFilePath))
                : Path.GetDirectoryName(templateFilePath)
            );

            DotLiquid.Template.FileSystem = new DotLiquid.FileSystems.LocalFileSystem(templateDirectory);
            templateRenderParameters.LocalVariables =
                model is System.Dynamic.ExpandoObject
                ? DotLiquid.Hash.FromDictionary(model as IDictionary<string, object>)
                : DotLiquid.Hash.FromAnonymousObject(model)
            ;
            template.Render(htmlTextWriter, templateRenderParameters);
            htmlTextWriter.Flush();
        }
    }
}

Now we have the data and some logic which creates HTML based on the given data. The next component is in the project MsDotNet.PdfGeneration.PdfPrinting. Here, an HTML document is received and converted to a PDF by the low-level library.

using System;
using WkHtmlToXSharp;

namespace PdfGeneration.PdfPrinting
{
    // See more information about the WkHtmlTox
    // http://wkhtmltopdf.org/libwkhtmltox/pagesettings.html
    // https://madalgo.au.dk/~jakobt/wkhtmltoxdoc/wkhtmltopdf-0.9.9-doc.html

    public class HtmlToPdfPrinter
    {
        static HtmlToPdfPrinter()
        {

            WkHtmlToXLibrariesManager.Register(new Win32NativeBundle());
            WkHtmlToXLibrariesManager.Register(new Win64NativeBundle());
        }

        public void Print(string htmlFilePath, string pdfFilePath)
        {
            using (System.IO.Stream pdfStreamWriter = System.IO.File.OpenWrite(pdfFilePath))
            using (var multiplexingConverter = GetDefaultConverter(
                setUpAction: m => m.ObjectSettings.Page = new Uri(htmlFilePath).ToString()
            ))
            {
                var pdfBytes = multiplexingConverter.Convert();
                pdfStreamWriter.Write(pdfBytes, 0, pdfBytes.Length);
                pdfStreamWriter.Flush();
            }
        }

        public void Print(System.IO.Stream htmlStream, System.IO.Stream pdfStream)
        {
            using (System.IO.TextReader htmlReader = new System.IO.StreamReader(htmlStream))
            {
                Print(htmlReader, pdfStream);
            }
        }

        public void Print(System.IO.TextReader htmlReader, System.IO.Stream pdfStream)
        {
            var htmlContent = htmlReader.ReadToEnd();
            Print(htmlContent, pdfStream);
        }

        public void Print(string htmlContent, System.IO.Stream pdfStream)
        {
            using (var multiplexingConverter = GetDefaultConverter())
            {
                var pdfBytes = multiplexingConverter.Convert(htmlContent);
                pdfStream.Write(pdfBytes, 0, pdfBytes.Length);
                pdfStream.Flush();
            }
        }

        private IHtmlToPdfConverter GetDefaultConverter(Action<IHtmlToPdfConverter> setUpAction = null)
        {
            var multiplexingConverter = new MultiplexingConverter();
            multiplexingConverter.ObjectSettings.Web.PrintMediaType = true;
            multiplexingConverter.GlobalSettings.Margin.Top = "1.25cm";
            multiplexingConverter.GlobalSettings.Margin.Bottom = "1.25cm";
            multiplexingConverter.GlobalSettings.Margin.Left = "1.25cm";
            multiplexingConverter.GlobalSettings.Margin.Right = "1.25cm";

            multiplexingConverter.ObjectSettings.Load.BlockLocalFileAccess = false;
            multiplexingConverter.ObjectSettings.Web.LoadImages = true;
            multiplexingConverter.ObjectSettings.Web.PrintMediaType = true;

            if (setUpAction != null)
                setUpAction(multiplexingConverter);
            return multiplexingConverter;
        }
    }
}

NOTE: Please notice that WkHtmlToXSharp requires you to register the native DLLs; to make sure this happens only once, the previous code does it in the static constructor.

Now we have everything we need to generate the PDF report. A partial goal of this example was to hide the libraries, in a way that lets you replace components (e.g. you may want to use Razor Engine instead of DotLiquid Markup). To do this, each project exposes only primitive and built-in .NET types. Thus, to communicate between the different modules and to deal with text files, the most common types are used: String, TextWriter, TextReader and Stream.

See all the components working together at the main app:

using PdfGeneration.Data;
using PdfGeneration.PdfPrinting;
using PdfGeneration.Templating;
using System;

namespace PdfGeneration
{
    class Program
    {
        static void Main(string[] args)
        {
            var dataProvider = new DataProvider();
            var templateRender = new TemplateRender();
            var htmlToPdfPrinter = new HtmlToPdfPrinter();
            templateRender.AddKnownType();
            var workingDirectory = AppDomain.CurrentDomain.BaseDirectory;
            var pdfFilePath = System.IO.Path.Combine(workingDirectory, @"Report.pdf");
            var templateFilePath = System.IO.Path.Combine(workingDirectory, @"Assets/Report.html");
            var templateDirectoryPath = System.IO.Path.GetDirectoryName(templateFilePath);

            if (System.IO.File.Exists(pdfFilePath))
                System.IO.File.Delete(pdfFilePath);

            dynamic reportData = dataProvider.GetReportData();

            #region Printing Using Stream
            using (System.IO.Stream htmlStream = new System.IO.MemoryStream())
            {
                templateRender.RenderTemplate(templateFilePath, htmlStream, reportData, hasToLeaveStreamOpen: true);
                htmlStream.Seek(0, System.IO.SeekOrigin.Begin);
                using (var pdfStreamWriter = System.IO.File.OpenWrite(pdfFilePath))
                {
                    htmlToPdfPrinter.Print(htmlStream, pdfStreamWriter);
                }
            }
            #endregion

            //#region Printing Using StringBuilder
            //var htmlStringBuilder = new StringBuilder();
            //using (System.IO.TextWriter htmlTextWriter = new System.IO.StringWriter(htmlStringBuilder))
            //{
            //    templateRender.RenderTemplate(templateFilePath, htmlTextWriter, reportData);
            //}
            //using (var pdfStreamWriter = System.IO.File.OpenWrite(pdfFilePath))
            //{
            //    var htmlContent = htmlStringBuilder.ToString();
            //    htmlToPdfPrinter.Print(htmlContent, pdfStreamWriter);
            //}
            //#endregion

            System.Diagnostics.Process.Start(pdfFilePath);
        }
    }
}

I have to admit that this problem is very interesting to solve and there are many alternatives; my approach strives to be simple and cheap. There are remarks which should be taken into account. Here is a simple result (without any effort on the CSS side):
MsDotNet PdfGeneration Capture

The code is at
https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.PdfGeneration

Happy coding!

Using jQuery DataTables, Entity Framework (EF) and Dynamic LINQ in conjunction


The Problem

A few months ago I started using DataTables (a plugin for jQuery to create neat tables in HTML). However, I noticed that I had written similar code twice to address the same issue. Thus, with this project I'd like to set a base of extensible code which deals with the common request and response work related to the DataTables API. The goal is to quickly provide an endpoint which returns data compatible with a DataTables data source.
All the filtering, sorting and projection should happen on the server side.

For this solution I’ll use the following components:
DataTables see https://www.datatables.net/
Dynamic LINQ see http://dynamiclinq.azurewebsites.net/

My project is inspired by https://github.com/kendo-labs/dlinq-helpers. Kendo Grid is a very robust table for HTML.

Remarks

– Filtering logic has not been taken into consideration, mainly because I didn't find a common mechanism for receiving filter information on the server side. If this becomes a requirement, I will consider mimicking the Kendo UI protocol for sending filtering information.
– Order and Search have been implemented.
– Implementations for an array of arrays and an array of objects have been provided; however, the developer is responsible for calling the proper methods.

The Solution

First of all, the idea is to create an extensible module. At this time the filtering logic is still pending a truly useful implementation, so I will start by showing the code of the DataTables module. The following code holds the models used by the module; these classes are basically POCOs used to receive or send information.

using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// Holds the information required for a given column
    /// </summary>
    [DataContract]
    public class ColumnRequest
    {
        /// <summary>
        /// Specific search information for the column
        /// </summary>
        [DataMember(Name = "search")]
        public SearchRequest Search { get; set; }

        /// <summary>
        /// Column's data source, as defined by columns.data.
        /// </summary>
        [DataMember(Name = "data")]
        public string Data { get; set; }

        /// <summary>
        /// Column's name, as defined by columns.name.
        /// </summary>
        [DataMember(Name = "name")]
        public string Name { get; set; }

        /// <summary>
        /// Flag to indicate if this column is search-able (true) or not (false). This is controlled by columns.searchable.
        /// </summary>
        [DataMember(Name = "searchable")]
        public bool? Searchable { get; set; }

        /// <summary>
        /// Flag to indicate if this column is orderable (true) or not (false). This is controlled by columns.orderable.
        /// </summary>
        [DataMember(Name = "orderable")]
        public bool? Orderable { get; set; }

    }
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// 
    /// </summary>
    [DataContract]
    public class SearchRequest
    {
        /// <summary>
        /// Search value
        /// </summary>
        [DataMember(Name = "value")]
        public string Value { get; set; }

        /// <summary>
        /// true if the search value should be treated as a regular expression for advanced searching, false otherwise. 
        /// </summary>
        [DataMember(Name = "regex")]
        public bool Regex { get; set; }

        public virtual string Operator { get; set; }

        public SearchRequest()
        {
            Operator = @"contains";
        }

        private static readonly IDictionary<string, string> operators = new Dictionary<string, string>
        {
            {"eq", "="},
            {"neq", "!="},
            {"lt", "<"},
            {"lte", "<="},
            {"gt", ">"},
            {"gte", ">="},
            {"startswith", "StartsWith"},
            {"endswith", "EndsWith"},
            {"contains", "Contains"},
            {"doesnotcontain", "Contains"}
        };

        public virtual string ToExpression(DataTableRequest request)
        {
            if (Regex)
                throw new NotImplementedException("Regular Expression is not implemented");

            string comparison = operators[Operator];
            List<string> expressions = new List<string>();
            foreach (var searchableColumn in request.Columns.Where(c => c.Searchable.HasValue && c.Searchable.Value))
            {
                if (Operator == "doesnotcontain")
                {
                    expressions.Add(string.Format("!{0}.ToString().{1}(\"{2}\")", searchableColumn.Data, comparison, Value));
                }
                else if (comparison == "StartsWith" || comparison == "EndsWith" || comparison == "Contains")
                {
                    expressions.Add(string.Format("{0}.ToString().{1}(\"{2}\")", searchableColumn.Data, comparison, Value));
                }
                else
                {
                    expressions.Add(string.Format("{0} {1} \"{2}\"", searchableColumn.Data, comparison, Value));
                }
            }
            return string.Join(" or ", expressions);
        }
    }
}

using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// Contains the information about the sort request
    /// </summary>
    [DataContract]
    public class OrderRequest
    {
        /// <summary>
        /// Indicates the orientation of the sort "asc" for ascending or "desc" for desc
        /// </summary>
        [DataMember(Name = "dir")]
        public string Dir { get; set; }

        /// <summary>
        /// Column which contains the number of column which requires this sort.
        /// </summary>
        [DataMember(Name = "column")]
        public int? Column { get; set; }

        public string ToExpression(DataTableRequest request)
        {
            return string.Concat(request.Columns[Column.Value].Data, " ", Dir);
        }
    }
}


using System.Collections;
using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// Encapsulates a Data Table response format
    /// </summary>
    /// <typeparam name="TData"></typeparam>
    [DataContract]
    public class DataTableResponse 
    {
        /// <summary>
        /// The draw counter that this object is a response to - from the draw parameter sent as part of the data request. Note that it is strongly recommended for security reasons that you cast this parameter to an integer, rather than simply echoing back to the client what it sent in the draw parameter, in order to prevent Cross Site Scripting (XSS) attacks.
        /// </summary>
        [DataMember(Name = "draw")]
        public int? Draw { get; set; }

        /// <summary>
        /// The data to be displayed in the table. 
        /// <remarks>
        /// This is an array of data source objects, one for each row, which will be used by DataTables. 
        /// Note that this parameter's name can be changed using the ajax option's dataSrc property.
        /// </remarks>
        /// </summary>
        [DataMember(Name = "data")]
        public IEnumerable Data { get; set; }

        /// <summary>
        /// Total records, before filtering (i.e. the total number of records in the database)
        /// </summary>
        [DataMember(Name = "recordsTotal")]
        public int? RecordsTotal { get; set; }

        /// <summary>
        /// Total records, after filtering (i.e. the total number of records after filtering has been applied - not just the number of records being returned for this page of data).
        /// </summary>
        [DataMember(Name = "recordsFiltered")]
        public int? RecordsFiltered { get; set; }

        /// <summary>
        /// Optional: If an error occurs during the running of the server-side processing script, you can inform the user of this error by passing back the error message to be displayed using this parameter. Do not include if there is no error.
        /// </summary>
        [DataMember(Name = "error")]
        public string Error { get; set; }
    }
}

using System.Collections.Generic;
using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// This class holds the minimum amount of information provided by DataTable to perform a request on server side.
    /// </summary>
    [DataContract]
    public class DataTableRequest
    {
        /// <summary>
        /// Column data request description
        /// </summary>
        [DataMember(Name = "columns")]
        public List<ColumnRequest> Columns { get; set; }

        /// <summary>
        /// Column requested order description
        /// </summary>
        [DataMember(Name = "order")]
        public List<OrderRequest> Order { get; set; }

        /// <summary>
        /// Global search value. To be applied to all columns which have search-able as true.
        /// </summary>
        [DataMember(Name = "search")]
        public SearchRequest Search { get; set; }

        /// <summary>
        /// Paging first record indicator. This is the start point in the current data set (0 index based - i.e. 0 is the first record).
        /// </summary>
        [DataMember(Name = "start")]
        public int? Start { get; set; }

        /// <summary>
        /// Number of records that the table can display in the current draw. It is expected that the number of records returned will be equal to this number, unless the server has fewer records to return. 
        /// </summary>
        [DataMember(Name = "length")]
        public int? Length { get; set; }

        /// <summary>
        /// Draw counter. This is used by DataTables to ensure that the Ajax returns from server-side processing requests are drawn in sequence by DataTables (Ajax requests are asynchronous and thus can return out of sequence). 
        /// This is used as part of the draw return parameter. 
        /// </summary>
        [DataMember(Name = "draw")]
        public int? Draw { get; set; }
    }
}
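
Before moving on, here is a quick illustration (with made-up column names and a made-up search value) of the Dynamic LINQ strings the models above produce:

using System;
using System.Collections.Generic;
using DataTables.Models;

class ExpressionSketch
{
    static void Main()
    {
        var request = new DataTableRequest
        {
            Columns = new List<ColumnRequest>
            {
                new ColumnRequest { Data = "FirstName", Searchable = true },
                new ColumnRequest { Data = "LastName",  Searchable = true }
            },
            Search = new SearchRequest { Value = "smith" },
            Order = new List<OrderRequest> { new OrderRequest { Column = 0, Dir = "asc" } }
        };

        // FirstName.ToString().Contains("smith") or LastName.ToString().Contains("smith")
        Console.WriteLine(request.Search.ToExpression(request));

        // FirstName asc
        Console.WriteLine(request.Order[0].ToExpression(request));
    }
}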

The way I have decided to implement the logic is through extension methods.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Dynamic;
using DataTables.Models;

namespace DataTables.Extensions
{
    public static class IQueryableExtensions
    {
        /// <summary>
        /// Computes a DataTables response based on the request. This response is compatible with an array of arrays
        /// </summary>
        /// <remarks>
        /// This response is compatible with an array of arrays
        /// </remarks>
        /// <typeparam name="TEntity">Entity to be returned in the data</typeparam>
        /// <param name="query">Initial data source. It will be part of the response after computations</param>
        /// <param name="request">Holds information about the request</param>
        /// <param name="getEntityFieldNamesFunc">Function which provides a sequence of fields of the entity to be used</param>
        /// <param name="getEntityAsEnumerableFunc">Function which converts an entity to an array</param>
        /// <returns></returns>
        public static DataTableResponse GetDataTableResponse<TEntity>(this IEnumerable<TEntity> query, DataTableRequest request,
            Func<TEntity, IEnumerable<string>> getEntityFieldNamesFunc, Func<TEntity, System.Collections.IEnumerable> getEntityAsEnumerableFunc)
        {
            // When using Arrays set the columns data before performing operations.
            SetColumnData(request, getEntityFieldNamesFunc);

            var response = GetDataTableResponse(ref query, request);

            // Converts the result compatible with an array (required by default for DataTables)
            response.Data = query.Select(getEntityAsEnumerableFunc);
            return response;
        }

        /// <summary>
        /// Computes a DataTables response based on the request. This response is compatible with an array of objects.
        /// </summary>
        /// <remarks>
        /// This response is compatible with an array of objects.
        /// </remarks>
        /// <typeparam name="TEntity">Entity to be returned in the data</typeparam>
        /// <param name="query">Initial data source. It will be part of the response after computations</param>
        /// <param name="request">Holds information about the request</param>
        /// <returns></returns>
        public static DataTableResponse GetDataTableResponse<TEntity>(this IEnumerable<TEntity> query, DataTableRequest request)
        {
            // When using Arrays set the columns data before performing operations.
            var response = GetDataTableResponse(ref query, request);

            // Converts the result compatible with an array (required by default for DataTables)
            response.Data = query;
            return response;
        }

        private static DataTableResponse GetDataTableResponse<TEntity>(ref IEnumerable<TEntity> query, DataTableRequest request)
        {
            var response = new DataTableResponse();
            // Setting up response
            response.Draw = request.Draw;
            response.RecordsTotal = query.Count();

            // sorting
            query = query.OrderBy(string.Join(",", request.Order.Select(o => o.ToExpression(request))));

            // search
            if (request.Search != null && !string.IsNullOrEmpty(request.Search.Value))
            {
                query = query.Where(request.Search.ToExpression(request));
            }

            // filtering


            // Counting results
            response.RecordsFiltered = query.Count();

            // Returning page (subset of filtered records)
            if (request.Start.HasValue && request.Start.Value > 0)
                query = query.Skip(request.Start.Value);
            if (request.Length.HasValue && request.Length.Value >= 0)
                query = query.Take(request.Length.Value);
            return response;
        }

        private static void SetColumnData<TEntity>(DataTableRequest request, Func<TEntity, IEnumerable<string>> func)
        {
            SetColumnData(request, func(default(TEntity)).ToArray());
        }

        private static void SetColumnData(DataTableRequest request, params string[] fields)
        {
            for (int index = 0; index < fields.Length && index < request.Columns.Count; index++)
            {
                var fieldIndex = 0;
                if (int.TryParse(request.Columns[index].Data, out fieldIndex))
                    request.Columns[index].Data = fields[fieldIndex];
            }
        }
    }
}

The previous code takes advantage of Dynamic LINQ to compute the queries, using the request information as input. With that code in place, we can use the following code in our controller:

using DataTables.Extensions;
using DataTables.Models;
using DataTableSample.Models;
using DataTableSample.Services;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Dynamic;
using System.Threading.Tasks;
using System.Web.Http;

namespace DataTableSample.Controllers
{
    public class EmployeesController : ApiController
    {
        private CompanyModel db = new CompanyModel();

        // GET: api/Employees
        public async Task<IHttpActionResult> GetEmployees([FromUri]DataTableRequest request)
        {
            var isUsingArrayOfArrays = request.Columns.Where(c => { var n = 0; return int.TryParse(c.Data, out n); }).Any();
            var response = isUsingArrayOfArrays
                ? db.Employees.AsNoTracking().AsQueryable().GetDataTableResponse(request, GetEntityFieldNames, GetEntityAsEnumerable)
                : db.Employees.AsNoTracking().AsQueryable().GetDataTableResponse(request)
                ;
            return Json(response);
        }

        /// <summary>
        /// Converts the Entity to a sequence of values (an Array)
        /// </summary>
        /// <param name="e"></param>
        /// <returns></returns>
        private System.Collections.IEnumerable GetEntityAsEnumerable(Employee e)
        {
            yield return e.FirstName;
            yield return e.LastName;
            yield return e.Position;
            yield return e.Office;
            yield return e.StartDate;
            yield return e.Salary;
        }

        /// <summary>
        /// Provides the name of the Columns used in the Array
        /// </summary>
        /// <param name="e"></param>
        /// <returns></returns>
        private IEnumerable<string> GetEntityFieldNames(Employee e)
        {
            yield return nameof(e.FirstName);
            yield return nameof(e.LastName);
            yield return nameof(e.Position);
            yield return nameof(e.Office);
            yield return nameof(e.StartDate);
            yield return nameof(e.Salary);
        }
    }
}

The last portion of code is the HTML page which hosts the two tables and wires them up to the endpoint:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.9/css/jquery.dataTables.min.css" />
    <meta charset="utf-8" />
</head>
<body>
    <h1>Using Objects</h1>
    <table id="example1" class="display" cellspacing="0" style="width:100%;">
        <thead>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </thead>
        <tfoot>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </tfoot>
    </table>
    <h1>Using Arrays</h1>
    <table id="example2" class="display" cellspacing="0" style="width:100%;">
        <thead>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </thead>
        <tfoot>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </tfoot>
    </table>
</body>
</html>
<script src="Scripts/jquery-1.7.js"></script>
<script src="Scripts/DataTables/jquery.dataTables.js"></script>
<script type="text/javascript">
    $(document).ready(function () {
        $('#example1').DataTable({
            "processing": true,
            "serverSide": true,
            "ajax": "api/Employees",
            "columns": [
                { "data": "FirstName" },
                { "data": "LastName" },
                { "data": "Position" },
                { "data": "Office" },
                { "data": "StartDate" },
                { "data": "Salary" }
            ]
        });

        $('#example2').DataTable({
            "processing": true,
            "serverSide": true,
            "ajax": "api/Employees"
        });
    });
</script>

By running the application, the two tables will be populated: one of them using an array of arrays and the other using an array of objects.
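For reference, when serverSide is enabled DataTables sends its paging, ordering and search state via the query-string parameters modeled above (draw, start, length, search[value], order[0][column], order[0][dir], columns[i][data], and so on), and the endpoint replies with JSON shaped roughly like this (values are purely illustrative):

{
    "draw": 1,
    "recordsTotal": 57,
    "recordsFiltered": 3,
    "data": [
        { "FirstName": "John", "LastName": "Doe", "Position": "Developer", "Office": "San Jose", "StartDate": "2013-08-09T00:00:00", "Salary": 98000 }
    ]
}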

The full source code can be downloaded from:
https://github.com/hmadrigal/playground-dotnet/tree/master/MsWebApi.DataTables

Cheers,
Herb

Generating JIRA-like tickets using Entity Framework (EF6) and MS SQL (T-SQL)


The problem

We have a master/detail relationship with two entities, and we want to identify detail records by using a friendly ID. The friendly ID is based on a master code (which is specified in the Master table) and a detail number (which works like a counter for each given master code). For example:

Master table (when IsAutoGen is 1, Details should auto-generate a friendly ID using the AutoGenPrefix):

Id          IsAutoGen AutoGenPrefix
----------- --------- -------------
1           1         GRPA
2           1         GRPB
5           0         NULL

Detail table (AutoGenPrefix-AutoGenNumber uniquely identifies each ticket by providing a friendly, auto-incremental ID):

Id          MasterId    AutoGenPrefix AutoGenNumber
----------- ----------- ------------- -------------
361         1           GRPA          1
362         2           GRPB          1
363         5           NULL          1
364         1           GRPA          2
365         2           GRPB          2
366         5           NULL          2
367         1           GRPA          3
368         2           GRPB          3

Remarks

Certainly this problem may have many different solutions. For example, it is technically possible to do all the computations using only C# and Entity Framework, with thread locks and database transactions to support concurrency. However, by doing so the developer becomes responsible for dealing with concurrency, and the database will be locked for longer periods since the changes happen at the business logic layer. Thus, this approach attempts to delegate the concurrency handling task to the database engine and reduce the lock time when possible. This solution is very particular to Entity Framework and Microsoft SQL Server, although ORMs and triggers are concepts available on most platforms nowadays.

The solution

Just in case you have skipped the text above, I encourage you to read the remarks. That said, first let's create our sample tables by using the following T-SQL script:

CREATE TABLE [dbo].[Master] (
    [Id]            INT         IDENTITY (1, 1) NOT NULL,
    [IsAutoGen]     BIT         CONSTRAINT [DF_Master_IsAutoGen] DEFAULT ((0)) NOT NULL,
    [AutoGenPrefix] VARCHAR (5) NULL,
    CONSTRAINT [PK_Master] PRIMARY KEY CLUSTERED ([Id] ASC)
);
GO
CREATE TABLE [dbo].[Detail] (
    [Id]            INT         IDENTITY (1, 1) NOT NULL,
    [MasterId]      INT         NOT NULL,
    [AutoGenPrefix] VARCHAR (5) NULL,
    [AutoGenNumber] INT         NULL,
    CONSTRAINT [PK_Detail] PRIMARY KEY CLUSTERED ([Id] ASC),
    CONSTRAINT [FK_Detail_Master] FOREIGN KEY ([MasterId]) REFERENCES [dbo].[Master] ([Id])
);
GO

It is as simple as two tables establishing a master-detail relationship. Now, with the same ease, let's look at the suggested trigger:

CREATE TRIGGER  [dbo].[Insert_Detail_ReadableId]
ON [dbo].[Detail]
INSTEAD OF INSERT
AS
BEGIN
   SET NOCOUNT OFF;
	INSERT INTO [Detail] ([MasterId],[AutoGenPrefix], [AutoGenNumber])
	SELECT 
		I.[MasterId] AS [MasterId],
		M2.AutoGenPrefix AS [AutoGenPrefix],
		ISNULL(ROW_NUMBER() OVER(ORDER BY I.[Id]) + M.[AutoGenNumber],1) AS [AutoGenNumber]
	FROM INSERTED  I
	LEFT JOIN [Master] M2 ON I.[MasterId] = M2.[Id]
	LEFT JOIN (
		SELECT 
			[MasterId],
			MAX(ISNULL([AutoGenNumber],0)) AS [AutoGenNumber]
		FROM [AutoGenDB].[dbo].[Detail]
		GROUP BY [MasterId]
	) M ON I.[MasterId] = M.[MasterId]

	SELECT [Id] FROM [Detail] WHERE @@ROWCOUNT > 0 AND [Id] = SCOPE_IDENTITY();
END

Notice that the trigger is responsible for performing the INSERT, since it has been configured as INSTEAD OF INSERT. The trigger uses the INSERTED variable, which holds a table with all the records that should have been inserted. Thus the trigger performs an INSERT INTO using as its data source the INSERTED table, the current max AutoGenNumber per master, and a row number that is added to that max AutoGenNumber.
Pay attention to the last line, which returns the inserted ID; this is required by Entity Framework to detect the INSERT operation as successful (see StackOverflow | OptimisticConcurrencyException — SQL 2008 R2 Instead of Insert Trigger with Entity Framework or StackOverflow | error when inserting into table having instead of trigger from entity data framework).

This is all we need at the T-SQL level. Every time we attempt to INSERT a new record, the trigger will perform the insert right after the INSERT request, computing the additional data along the way.
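For example, inserting a few detail rows directly in T-SQL (using the master rows shown earlier, where Id 1 uses prefix GRPA and Id 2 uses GRPB) and selecting them back shows the computed columns:

INSERT INTO [dbo].[Detail] ([MasterId]) VALUES (1);
INSERT INTO [dbo].[Detail] ([MasterId]) VALUES (2);
INSERT INTO [dbo].[Detail] ([MasterId]) VALUES (1);

SELECT [Id], [MasterId], [AutoGenPrefix], [AutoGenNumber]
FROM [dbo].[Detail]
ORDER BY [Id];
-- Each master keeps its own counter: the two GRPA rows get AutoGenNumber 1 and 2,
-- while the GRPB row gets AutoGenNumber 1.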

In our C# code, I'll assume we're using an EDMX file and a DB-first approach. Thus, we usually only update our DbContext by updating our models from Visual Studio (VS).

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity.Infrastructure;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace AutoGenDB
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var context = new AutoGenDB.AutoGenDBEntities())
            {
                // if master table is empty, then it inserts some master records.
                if (!context.Masters.Any())
                {
                    for (int masterIndex = 0; masterIndex < 5; masterIndex++)
                    {
                        var isAutoGenEnabled = masterIndex % 2 == 0;
                        var newMaster = new Master { IsAutoGen = isAutoGenEnabled, AutoGenPrefix = isAutoGenEnabled ? "GRP" + Convert.ToChar((masterIndex % 65) + 65) : default(string) };
                        context.Masters.Add(newMaster);
                    }
                    context.SaveChanges();
                }

                // Inserts some details on different groups
                List<Detail> details = new List<Detail>();
                List<Master> masters = context.Masters.AsNoTracking().ToList();
                for (int i = 0; i < 100; i++)
                {
                    var masterId = masters[i % masters.Count].Id;
                    details.Add(new Detail() { MasterId = masterId });
                }
                context.Details.AddRange(details);
                context.SaveChanges();

                // Prints 10 auto generated IDS
                foreach (var master in context.Masters.AsNoTracking().Take(3))
                    foreach (var detail in master.Details.Take(5))
                        Console.WriteLine(master.IsAutoGen ? "Ticket:{0}-{1}" : @"N/A", detail.AutoGenPrefix, detail.AutoGenNumber);
            }
            Console.ReadKey();
        }
    }
}

The previous C# code inserts some records and prints the information generated by the trigger. At this point there is only one missing component.
If we run the code as it is, the application won't load the auto-generated values, because Entity Framework does not know that these columns were populated. One option is to instruct Entity Framework about when it should reload the entities. The following code decorates the Detail class with an IDetail interface, which the DbContext uses to identify the entity type that should be reloaded.

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace AutoGenDB
{
    interface IDetail
    {
        int? AutoGenNumber { get; set; }
        string AutoGenPrefix { get; set; }
        int Id { get; set; }
        Master Master { get; set; }
        int MasterId { get; set; }
    }

    public partial class AutoGenDBEntities : DbContext
    {
        public override int SaveChanges()
        {
            var entriesToReload = ChangeTracker.Entries<IDetail>().Where(e => e.State == EntityState.Added).ToList();
            int rowCount = base.SaveChanges();
            if (rowCount > 0 && entriesToReload.Count > 0)
                entriesToReload.ForEach(e => e.Reload());
            return rowCount;
        }

    }

    public partial class Detail : AutoGenDB.IDetail
    {

    }
}

Please notice how the subclass of the DbContext overrides the SaveChanges method in order to reload the instances of IDetail. Reloading entities may or may not be desirable depending on the application.
With this override, the DbContext knows when a given entity should be reloaded.
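
To see the effect, here is a small illustrative snippet (not part of the sample project; it assumes at least one Master row with Id = 1 already exists):

// Illustrative usage only: after SaveChanges, the overridden context reloads the new
// Detail entity, so the values computed by the INSTEAD OF INSERT trigger are visible.
using (var context = new AutoGenDB.AutoGenDBEntities())
{
    var detail = new Detail { MasterId = 1 }; // assumes a Master row with Id = 1 exists
    context.Details.Add(detail);
    context.SaveChanges();

    // AutoGenPrefix and AutoGenNumber were filled in by the trigger and pulled back by Reload().
    Console.WriteLine("{0}-{1}", detail.AutoGenPrefix, detail.AutoGenNumber);
}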

As usual, the code sample can be checked out at https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.EntityFrameworkAndTriggers

How to embed fonts in Windows Phone 8 using a WebBrowser control


Hello,

I hope to share some ideas from my last project. This one in particular was difficult.

The problem

I have a font I want to embed into my Windows Phone application, and I want to use it in a WebBrowser control. I have managed to embed the font and use it with standard TextBlock controls; however, I haven't gotten it working with the WebBrowser control.

The Solution

Use the Data URI scheme to embed the font ( http://en.wikipedia.org/wiki/Data_URI_scheme ), then let the WebBrowser control consume the font.

The first thing I learned is that a Data URI can contain basically any data (e.g. images, fonts, other files; it's really handy). Moreover, I learned that CSS can consume this data, which makes it a good option to embed the font and use it offline. There may be a few bumps, such as the Base64 encoding, but once we get the rhythm it won't hurt anymore.

1) In your Windows Phone project, add the font file. One remark here: make sure the file format is compatible with Windows Phone (TTF files) or with the Windows Phone WebBrowser (WOFF). TTF files may require some work to get them working, for example enabling the flag that allows the file to be embedded, or making sure that their internal file structure is compatible with the WP platform.

2) When embedding the file, make sure that the file properties are set: Build Action to Content and Copy to Output Directory to Copy if newer.

3) Prepare some HTML for setting up the font. In my sample project I'll be using the NavigateToString method, and, to keep the HTML code apart from my C# code, I'll load an external file from the XAP package and apply some text replacements with regular expressions.

4) In our sample, check MainPage.xaml.cs to see how the FileManager is used to apply the custom font; a simplified sketch of the idea follows below.
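
To give an idea of what steps 3 and 4 boil down to, here is a simplified sketch. It is not the exact code from the sample: the file name Fonts/MyFont.woff, the font family MyFont and a WebBrowser control named Browser are assumptions, and the data: MIME type and format() hint must match the actual font file type.

// Illustrative sketch (assumed names): load a font packaged as Content in the XAP,
// Base64-encode it, and hand the WebBrowser an HTML page that declares it via @font-face.
using System;
using System.IO;
using System.Windows;

public static class EmbeddedFontHtml
{
    public static string Build(string fontRelativePath, string fontFamily, string bodyText)
    {
        // Content files shipped in the XAP can be opened with Application.GetResourceStream.
        var resourceInfo = Application.GetResourceStream(new Uri(fontRelativePath, UriKind.Relative));
        byte[] fontBytes;
        using (var memory = new MemoryStream())
        {
            resourceInfo.Stream.CopyTo(memory);
            fontBytes = memory.ToArray();
        }

        // The Base64 string becomes the payload of the data: URI consumed by the CSS.
        string base64Font = Convert.ToBase64String(fontBytes);

        return
            "<html><head><style>" +
            "@font-face { font-family: '" + fontFamily + "'; " +
            "src: url('data:font/woff;base64," + base64Font + "') format('woff'); }" +
            "body { font-family: '" + fontFamily + "'; font-size: 24px; }" +
            "</style></head><body>" + bodyText + "</body></html>";
    }
}

// Usage, e.g. from MainPage's Loaded handler:
// Browser.NavigateToString(EmbeddedFontHtml.Build("Fonts/MyFont.woff", "MyFont", "Hello embedded font"));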

I haven't tried it, but I think this approach could be extended to inject a custom font into an external page. Here is the output:

[Image: the custom font rendered by the WebBrowser control]

The code sample is at

https://github.com/hmadrigal/playground-dotnet/tree/master/MsWinPhone.EmbedFont

I hope you find this handy,

Cheers,

Herb

After moving some files my Windows Phone Project Throws a XamlParseException


I am not sure whether this is a known issue for Microsoft, but I am at least aware of it. This is the first time I document it, since it can be very frustrating.

The Problem

I want to reorganize the files in my Windows Phone project to follow our implementation of MVVM; moreover, we have decided that the RESX files will live in their own assembly project. However, once I move the resource file to an external assembly, the application just throws a XamlParseException even though the reference in XAML is totally correct.

REMARKS
XamlParseExceptions may be thrown for other reasons. It is important to realize that I knew I had moved the RESX file to a different assembly and that everything was working before that. Certainly I updated the reference to the new assembly, and the exception was still being thrown. That is why this is a tricky issue.

The solution

It took me some time to notice, but the project that contains the RESX file cannot have a dot (.) character in its assembly name. If it does, the XAML parser will throw a XamlParseException even though the reference to the assembly is correct. The error message may contain something like this:

System.Windows.Markup.XamlParseException occurred
  HResult=-2146233087
  Message=Unknown parser error: Scanner 2147500037. [Line: 10 Position: 126]
  Source=System.Windows
  LineNumber=10
  LinePosition=126
  StackTrace:
       at System.Windows.Application.LoadComponent(Object component, Uri resourceLocator)
       at MyApp.App.InitializeComponent()
       at MyApp.App..ctor()
  InnerException: 

Taking a look at the location where I was loading the resource, it was this:

<Application
    x:Class="MyApp.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
    xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone">

    <!--Application Resources-->
    <Application.Resources>
        <local:LocalizedStrings xmlns:local="clr-namespace:MyApp;assembly=MyApp.WindowsPhone8.Resources" x:Key="LocalizedStrings"/>
    </Application.Resources>

    <Application.ApplicationLifetimeObjects>
        <!--Required object that handles lifetime events for the application-->
        <shell:PhoneApplicationService
            Launching="Application_Launching" Closing="Application_Closing"
            Activated="Application_Activated" Deactivated="Application_Deactivated"/>
    </Application.ApplicationLifetimeObjects>

</Application>

One of my projects generated an assembly named ‘MyApp.WindowsPhone8.Resources’. To resolve the issue, I only had to change the generated assembly name to ‘MyApp_WindowsPhone8_Resources’ and then update the reference in the XAML. For example:

        <local:LocalizedStrings xmlns:local="clr-namespace:MyApp;assembly=MyApp_WindowsPhone8_Resources" x:Key="LocalizedStrings"/>

After performing this change your app should work normally.

Pivot ( or Panorama ) Index Indicator


Hello,

The Problem:

Creating a pivot (or panorama) indicator could be a challenging task. The control should:

  • Indicate in which item of the collection you are looking at
  • Let you tap on a given item and navigate in the panorama to that given item
  • Let you customize the look and feel of the items.

The Solution: A Second thought

It sounds like the perfect scenario for a custom control, and in some cases it is. However, after some minutes thinking about this idea, I realized that the ListBox already supports most of these requirements, with only one pending task: it has to interact properly with the Panorama or Pivot. Thus, the current solution uses a ListBox (with custom styles and templates) for modifying the look and feel, and prepares a behavior (more specifically a TargetedTriggerAction) that lets the ListBox interact with an index-based control (e.g. Pivot, Panorama, ListBox, etc.).

A behavior …  (What’s that?)

Well, with XAML a bunch of new concepts arrived in the .NET development world. One in particular which is very useful is the behavior. You can think of a behavior as encapsulated functionality that can be reused in different contexts on the same kind of items. The most popular behavior is, I guess, EventToCommand, which can be applied to any FrameworkElement and maps an event (for instance Loaded) to a command when implementing a view model.

Since we already have a ListBox in place, we only want it to stay synchronized with a Pivot or Panorama. Thus, the external control will be a parameter for our behavior.

    [TypeConstraint(typeof(Selector))]
    public class SetSelectedItemAction : TargetedTriggerAction<FrameworkElement>
    {
        private Selector _selector;

        protected override void OnAttached()
        {
            base.OnAttached();

            _selector = AssociatedObject as Selector;
            if (_selector == null)
                return;
        }

        protected override void OnDetaching()
        {
            base.OnDetaching();
            if (_selector == null)
                return;
            _selector = null;
        }

        protected override void Invoke(object parameter)
        {
            if (_selector == null)
                return;

            if (Target is Panorama)
                InvokeOnPanorama(Target as Panorama);
            if (Target is Pivot)
                InvokeOnPivot(Target as Pivot);
            if (Target is Selector)
                InvokeOnSelector(Target as Selector);
        }

        private void InvokeOnSelector(Selector selector)
        {
            if (selector == null)
                return;
            selector.SelectedIndex = _selector.SelectedIndex;
        }

        private void InvokeOnPivot(Pivot pivot)
        {
            if (pivot == null)
                return;
            pivot.SelectedIndex = _selector.SelectedIndex;
        }

        private void InvokeOnPanorama(Panorama panorama)
        {
            if (panorama == null)
                return;
            panorama.DefaultItem = panorama.Items[_selector.SelectedIndex];
        }
    }

The idea is that you should be able to sync other elements by just dropping this behavior on a ListBox and setting up a few properties, for example:

<ListBox
            x:Name="listbox"
			HorizontalAlignment="Stretch" Margin="12" VerticalAlignment="Top"
            SelectedIndex="{Binding SelectedIndex,ElementName=panorama,Mode=TwoWay}"
            ItemsSource="{Binding PanoramaItems}"
            ItemsPanel="{StaticResource HorizontalPanelTemplate}"
            ItemContainerStyle="{StaticResource ListBoxItemStyle1}"
            ItemTemplate="{StaticResource IndexIndicatorDataTemplate}"
            >
            <i:Interaction.Triggers>
                <i:EventTrigger EventName="SelectionChanged">
                    <local:SetSelectedItemAction TargetObject="{Binding ElementName=panorama}"/>
                </i:EventTrigger>
            </i:Interaction.Triggers>
        </ListBox>

The behavior could take on more responsibility than keeping the selected item in sync in one direction. However, my design decision was to use other mechanisms for keeping the indexes in sync (e.g. you can see a TwoWay binding on the SelectedIndex property of the target Pivot or Panorama).

Also, the ListBox is bound to a list of (empty) items just so it can generate as many items as the Panorama has. This is another piece of logic that could be moved into the behavior (although in my two previous attempts it didn't work that well). A minimal sketch of such a bound collection follows below.
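
For reference, here is that sketch; the class and property names are hypothetical, and the real sample wires this up in its own view model.

// Hypothetical sketch: the indicator ListBox only needs one placeholder entry per
// Panorama section, so empty objects are enough for the ItemTemplate to render the dots.
using System.Collections.ObjectModel;

public class IndicatorViewModel
{
    public ObservableCollection<object> PanoramaItems { get; private set; }

    public IndicatorViewModel(int panoramaSectionCount)
    {
        PanoramaItems = new ObservableCollection<object>();
        for (int i = 0; i < panoramaSectionCount; i++)
            PanoramaItems.Add(new object());
    }
}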

Here is the code sample of this simple behavior and how it keeps in sync with the Panorama:

https://github.com/hmadrigal/playground-dotnet/tree/master/MsWinPhone.ParanoramaIndex

The following screenshot shows an indicator in the top/left corner where you can see the current tab of the Panorama that is being shown. Moreover, the user can tap on a given item to get directly to that selected tab.

[Image: Panorama index indicator]

Cheers
Herb

Custom Configuration files in .NET


Hi,

I would like to share some ideas about how to deal with configuration files in .NET. I am sure there are many options for implementing configuration files; some are better than others, and some depend on the platform (e.g. mobile, desktop, cloud, etc.). In particular I'll explain an approach that should work for your web application and for your desktop application.

The problem:

We want to quickly create a configuration file, since we foresee that our application will have a good amount of settings.

Solution:

I'll take advantage of the default .NET mechanism for configuration files. It's plenty flexible and quite extensible. The major drawback is that you'll have to write code, and sometimes not-so-easy code. Alternatively, you can rely on the existing section handlers and try to use them when possible.

A brief talk about configuration files

.NET configuration files support hierarchies and they are extensible. In almost all cases the defaults are fairly enough; for creating custom sections I will be explaining three approaches, but certainly there are more. The decision of which approach to take will depend on how much time you have, and also on whether the team is willing and able to install at least the Configuration Section Designer.

1) Create Custom Section Handler with Code Snippets

This is the simplest one: it is basically about grabbing code snippets for how to create custom sections (ConfigurationSection subclasses). You are likely to end up with many properties of similar types (probably primitive types), but it can save time when producing the class and understanding it. The sample project does not cover this approach, but a minimal illustrative sketch follows below.
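
A minimal hand-written section might look like the sketch below. The section and property names are purely illustrative and are not part of the sample project; it requires a reference to System.Configuration.

// Illustrative only: a hand-written ConfigurationSection with two typed properties.
// Registered in <configSections>, it would be used as:
// <storageSection connectionName="..." retryCount="..." />
using System.Configuration;

public class StorageSection : ConfigurationSection
{
    [ConfigurationProperty("connectionName", IsRequired = true)]
    public string ConnectionName
    {
        get { return (string)this["connectionName"]; }
        set { this["connectionName"] = value; }
    }

    [ConfigurationProperty("retryCount", DefaultValue = 3)]
    public int RetryCount
    {
        get { return (int)this["retryCount"]; }
        set { this["retryCount"] = value; }
    }
}

// Loading it in code:
// var storage = (StorageSection)ConfigurationManager.GetSection("storageSection");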

2) Using the Custom Section Designer from CodePlex at https://csd.codeplex.com/

This is my favorite, but unfortunately it requires installing a Visual Studio extension in order to open configuration section projects. The best part of this approach is that you get:

  • XSD validation + IntelliSense
  • XSD Documentation
  • Ease to modify content.
  • etc…

First, get the proper installer for the extension at https://csd.codeplex.com/. Then create a configuration section project (these projects output a DLL); by convention these projects end with “.Configuration”, for example “MyApp.Configuration” would be the DLL project that loads the configuration section. In our sample it is the project called ConfigurationFileSample.Configuration.

[Image: the custom section designed in the Configuration Section Designer]

The previous screenshot shows a section in which an element called Mappings has been defined, which is a collection of Mapping elements. By following the instructions at https://csd.codeplex.com/wikipage?title=Defining%20new%20types&referringTitle=Usage you can easily create this configuration. The magic arrives when you are typing these values into Visual Studio (or any XML editor that supports XSD validation).

Once you are done designing your configuration file, you save and compile. If it succeeds, add a configuration file to your main project and also add a reference to the configuration file project. To reference our custom section, we register it in the configSections element. For example:


<!-- NOTE: In here a custom section is specified, this section has been created by writing code-->
<section name="minestroneSection" type="Minestrone.MinestroneSection, ConfigurationFileSample.Configuration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>

Visual Studio will enable help tooltips and XSD validation for you, as you can see in the following screenshot:

[Image: configuration tooltip shown by Visual Studio]

To load the configuration section in code, after adding the reference to the DLL, the app can use a static helper or simply use the GetSection method.

For example, to load the section using the default instance:

// NOTE: It is also possible to load the default custom section
Console.WriteLine("\nLoading default Minestrone thru singleton!");
var minestroneSection = Minestrone.MinestroneSection.Instance;
PrintMinestrone(minestroneSection);

Or you could load a specific section:

// NOTE: You could ask for a given custom section
Console.WriteLine("\nLoading Minestrone by manually specifying a section: minestroneSection");
var minestroneSection = ConfigurationManager.GetSection("minestroneSection") as Minestrone.MinestroneSection;
PrintMinestrone(minestroneSection);

3) Using one of the built-in section handlers
As I mentioned, you could also use the built-in section handlers from the .NET Framework. There is a list of them at the end of this page: http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx (basically all the subclasses of System.Configuration.ConfigurationSection).

For example, to reference it in the configuration file:


<!-- NOTE:This custom section uses .NET framework sections instead, see http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx for a list of the classes available into the framework -->
<section name="myConfigSection" type="System.Configuration.AppSettingsSection, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />

and for loading it from code:

// NOTE: Loads a custom section, but it uses a .NET built-in class
Console.WriteLine("\nLoading a ApplicationSettings using a built-in .NET type. (more at http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx ) ");
var myConfigSection = ConfigurationManager.GetSection("myConfigSection") as System.Collections.Specialized.NameValueCollection;
for (int appSettingIndex = 0; appSettingIndex < myConfigSection.Count; appSettingIndex++)
{
    Console.WriteLine(string.Format("Key:{0} Value:{1}", myConfigSection.AllKeys[appSettingIndex], myConfigSection[appSettingIndex]));
}
Console.WriteLine("\n\n PRESS ANY KEY TO FINISH! ");
Console.ReadKey();

We are almost there, but WAIT!!!!

I know that at this point I haven't shown how to specify the section content. That is basically because you can provide the details of the section inline in the same file, or in an external file using the configSource attribute (which is available for any custom section).

For example, see how the minestroneSection is defined in an external file, and how myConfigSection is written inline in the configuration file. There are advantages to each approach (e.g. when you want to apply XSLT transformations, you may prefer to transform a small file rather than a long and complex XML), or perhaps you may have a config file per environment (e.g. minestrone.debug.config, …).

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <!-- NOTE: In here a custom section is specified, this section has been created by writing code-->
    <section name="minestroneSection" type="Minestrone.MinestroneSection, ConfigurationFileSample.Configuration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>

    <!-- NOTE:This custom section uses .NET framework sections instead, see http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx for a list of the classes available into the framework -->
    <section name="myConfigSection" type="System.Configuration.AppSettingsSection, System.Configuration, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />

  </configSections>
  <myConfigSection>
    <add key ="ALPHA" value="1|1"/>
    <add key ="BETA" value="1|1"/>
    <add key ="GAMA" value="1|1"/>
  </myConfigSection>

  <!-- NOTE: Loading section from a external file-->
  <minestroneSection configSource="minestroneSection.config" />

  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
</configuration>

I hope this is good enough for quickly setting up custom configuration files in your .NET projects.
The sample is at https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.CustomConfigFile and please remember to install the Configuration Section Designer if you want the option of a more detailed configuration file.

Regards,
Herber