
Using jQuery DataTables, Entity Framework (EF) and Dynamic LINQ in conjunction


The Problem

A few months ago I started using DataTables (a jQuery plugin for creating neat HTML tables). However, I noticed that I had twice written very similar code to address the same issue. With this project I'd like to set a base of extensible code that deals with the common request and response work related to the DataTables API. The goal is to quickly provide an endpoint that serves data compatible with a DataTables data source.
All filtering, sorting and projection should happen on the server side.

For this solution I’ll use the following components:
DataTables: https://www.datatables.net/
Dynamic LINQ: http://dynamiclinq.azurewebsites.net/

My project is inspired by https://github.com/kendo-labs/dlinq-helpers. Kendo Grid is a very robust table for HTML.

Remarks

– Filtering logic has not been taken into consideration, mainly because I did not find a common mechanism for receiving filter information on the server side. If this becomes a requirement, I will consider mimicking the Kendo UI protocol for sending filtering information.
– Order and Search have been implemented.
– Implementations for both array-of-arrays and array-of-objects responses are provided; however, the developer is responsible for calling the proper methods.

The Solution

First of all, the idea is to create an extensible module. At this time the filtering logic is still pending a really useful implementation, so I will start by showing the code of the DataTables module. The following code holds the models used by the module; these classes are basically POCOs used to receive or send information.

using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// Holds the information required for a given column
    /// </summary>
    [DataContract]
    public class ColumnRequest
    {
        /// <summary>
        /// Specific search information for the column
        /// </summary>
        [DataMember(Name = "search")]
        public SearchRequest Search { get; set; }

        /// <summary>
        /// Column's data source, as defined by columns.data.
        /// </summary>
        [DataMember(Name = "data")]
        public string Data { get; set; }

        /// <summary>
        /// Column's name, as defined by columns.name.
        /// </summary>
        [DataMember(Name = "name")]
        public string Name { get; set; }

        /// <summary>
        /// Flag to indicate if this column is search-able (true) or not (false). This is controlled by columns.searchable.
        /// </summary>
        [DataMember(Name = "searchable")]
        public bool? Searchable { get; set; }

        /// <summary>
        /// Flag to indicate if this column is orderable (true) or not (false). This is controlled by columns.orderable.
        /// </summary>
        [DataMember(Name = "orderable")]
        public bool? Orderable { get; set; }

    }
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// Holds search information, either the global search or a column-specific search.
    /// </summary>
    [DataContract]
    public class SearchRequest
    {
        /// <summary>
        /// Search value
        /// </summary>
        [DataMember(Name = "value")]
        public string Value { get; set; }

        /// <summary>
        /// true if the search value should be treated as a regular expression for advanced searching, false otherwise. 
        /// </summary>
        [DataMember(Name = "regex")]
        public bool Regex { get; set; }

        public virtual string Operator { get; set; }

        public SearchRequest()
        {
            Operator = @"contains";
        }

        private static readonly IDictionary<string, string> operators = new Dictionary<string, string>
        {
            {"eq", "="},
            {"neq", "!="},
            {"lt", "<"},
            {"lte", "<="},
            {"gt", ">"},
            {"gte", ">="},
            {"startswith", "StartsWith"},
            {"endswith", "EndsWith"},
            {"contains", "Contains"},
            {"doesnotcontain", "Contains"}
        };

        public virtual string ToExpression(DataTableRequest request)
        {
            if (Regex)
                throw new NotImplementedException("Regular Expression is not implemented");

            string comparison = operators[Operator];
            List<string> expressions = new List<string>();
            foreach (var searchableColumn in request.Columns.Where(c => c.Searchable.HasValue && c.Searchable.Value))
            {
                if (Operator == "doesnotcontain")
                {
                    expressions.Add(string.Format("!{0}.ToString().{1}(\"{2}\")", searchableColumn.Data, comparison, Value));
                }
                else if (comparison == "StartsWith" || comparison == "EndsWith" || comparison == "Contains")
                {
                    expressions.Add(string.Format("{0}.ToString().{1}(\"{2}\")", searchableColumn.Data, comparison, Value));
                }
                else
                {
                    expressions.Add(string.Format("{0} {1} \"{2}\"", searchableColumn.Data, comparison, Value));
                }
            }
            return string.Join(" or ", expressions);
        }
    }
}
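
To make the output concrete, here is a minimal sketch (a hypothetical request whose searchable columns are FirstName and Office) of what ToExpression produces for a global search:

var search = new SearchRequest { Value = "Tok" }; // Operator defaults to "contains"
// search.ToExpression(request) returns the Dynamic LINQ predicate:
//   FirstName.ToString().Contains("Tok") or Office.ToString().Contains("Tok")
string predicate = search.ToExpression(request);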

using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// Contains the information about the sort request
    /// </summary>
    [DataContract]
    public class OrderRequest
    {
        /// <summary>
        /// Indicates the orientation of the sort: "asc" for ascending or "desc" for descending
        /// </summary>
        [DataMember(Name = "dir")]
        public string Dir { get; set; }

        /// <summary>
        /// Index of the column to which this sort should be applied.
        /// </summary>
        [DataMember(Name = "column")]
        public int? Column { get; set; }

        public string ToExpression(DataTableRequest request)
        {
            return string.Concat(request.Columns[Column.Value].Data, " ", Dir);
        }
    }
}
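
Similarly, a sketch (hypothetical values) of what OrderRequest.ToExpression produces:

var order = new OrderRequest { Column = 0, Dir = "asc" };
// With request.Columns[0].Data == "FirstName" this returns "FirstName asc".
// The extension methods shown later join several of these with commas,
// e.g. "FirstName asc,Salary desc", which Dynamic LINQ's OrderBy accepts.
string expression = order.ToExpression(request);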


using System.Collections;
using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// Encapsulates a Data Table response format
    /// </summary>
    [DataContract]
    public class DataTableResponse 
    {
        /// <summary>
        /// The draw counter that this object is a response to - from the draw parameter sent as part of the data request. Note that it is strongly recommended for security reasons that you cast this parameter to an integer, rather than simply echoing back to the client what it sent in the draw parameter, in order to prevent Cross Site Scripting (XSS) attacks.
        /// </summary>
        [DataMember(Name = "draw")]
        public int? Draw { get; set; }

        /// <summary>
        /// The data to be displayed in the table. 
        /// <remarks>
        /// This is an array of data source objects, one for each row, which will be used by DataTables. 
        /// Note that this parameter's name can be changed using the ajax option's dataSrc property.
        /// </remarks>
        /// </summary>
        [DataMember(Name = "data")]
        public IEnumerable Data { get; set; }

        /// <summary>
        /// Total records, before filtering (i.e. the total number of records in the database)
        /// </summary>
        [DataMember(Name = "recordsTotal")]
        public int? RecordsTotal { get; set; }

        /// <summary>
        /// Total records, after filtering (i.e. the total number of records after filtering has been applied - not just the number of records being returned for this page of data).
        /// </summary>
        [DataMember(Name = "recordsFiltered")]
        public int? RecordsFiltered { get; set; }

        /// <summary>
        /// Optional: If an error occurs during the running of the server-side processing script, you can inform the user of this error by passing back the error message to be displayed using this parameter. Do not include if there is no error.
        /// </summary>
        [DataMember(Name = "error")]
        public string Error { get; set; }
    }
}

using System.Collections.Generic;
using System.Runtime.Serialization;

namespace DataTables.Models
{
    /// <summary>
    /// This class holds the minimum amount of information provided by DataTable to perform a request on server side.
    /// </summary>
    [DataContract]
    public class DataTableRequest
    {
        /// <summary>
        /// Column data request description
        /// </summary>
        [DataMember(Name = "columns")]
        public List<ColumnRequest> Columns { get; set; }

        /// <summary>
        /// Column requested order description
        /// </summary>
        [DataMember(Name = "order")]
        public List<OrderRequest> Order { get; set; }

        /// <summary>
        /// Global search value. To be applied to all columns which have search-able as true.
        /// </summary>
        [DataMember(Name = "search")]
        public SearchRequest Search { get; set; }

        /// <summary>
        /// Paging first record indicator. This is the start point in the current data set (0 index based - i.e. 0 is the first record).
        /// </summary>
        [DataMember(Name = "start")]
        public int? Start { get; set; }

        /// <summary>
        /// Number of records that the table can display in the current draw. It is expected that the number of records returned will be equal to this number, unless the server has fewer records to return. 
        /// </summary>
        [DataMember(Name = "length")]
        public int? Length { get; set; }

        /// <summary>
        /// Draw counter. This is used by DataTables to ensure that the Ajax returns from server-side processing requests are drawn in sequence by DataTables (Ajax requests are asynchronous and thus can return out of sequence). 
        /// This is used as part of the draw return parameter. 
        /// </summary>
        [DataMember(Name = "draw")]
        public int? Draw { get; set; }
    }
}
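
For reference, when server-side processing is enabled, DataTables sends these values as query-string parameters, which Web API binds onto DataTableRequest through the [FromUri] attribute. An abridged example of such a request (the remaining columns follow the same pattern):

api/Employees?draw=1&start=0&length=10
    &search[value]=Tok&search[regex]=false
    &order[0][column]=0&order[0][dir]=asc
    &columns[0][data]=FirstName&columns[0][searchable]=true&columns[0][orderable]=true&columns[0][search][value]=&columns[0][search][regex]=false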

The way I have decided to implement the logic is through extension methods.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Dynamic;
using DataTables.Models;

namespace DataTables.Extensions
{
    public static class IQueryableExtensions
    {
        /// <summary>
        /// Computes a DataTables response based on the request. This response is compatible with an array of arrays
        /// </summary>
        /// <remarks>
        /// This response is compatible with an array of arrays
        /// </remarks>
        /// <typeparam name="TEntity">Entity to be returned in the data</typeparam>
        /// <param name="query">Initial data source. It will be part of the response after computations</param>
        /// <param name="request">Holds information about the request</param>
        /// <param name="getEntityFieldNamesFunc">Function which provides a sequence of fields of the entity to be used</param>
        /// <param name="getEntityAsEnumerableFunc">Function which converts an entity to an array</param>
        /// <returns></returns>
        public static DataTableResponse GetDataTableResponse<TEntity>(this IEnumerable<TEntity> query, DataTableRequest request,
            Func<TEntity, IEnumerable<string>> getEntityFieldNamesFunc, Func<TEntity, System.Collections.IEnumerable> getEntityAsEnumerableFunc)
        {
            // When using Arrays set the columns data before performing operations.
            SetColumnData(request, getEntityFieldNamesFunc);

            var response = GetDataTableResponse(ref query, request);

            // Converts each entity to an array (the default format expected by DataTables)
            response.Data = query.Select(getEntityAsEnumerableFunc);
            return response;
        }

        /// <summary>
        /// Computes a DataTables response based on the request. This response is compatible with an array of objects.
        /// </summary>
        /// <remarks>
        /// This response is compatible with an array of objects.
        /// </remarks>
        /// <typeparam name="TEntity">Entity to be returned in the data</typeparam>
        /// <param name="query">Initial data source. It will be part of the response after computations</param>
        /// <param name="request">Holds information about the request</param>
        /// <returns></returns>
        public static DataTableResponse GetDataTableResponse<TEntity>(this IEnumerable<TEntity> query, DataTableRequest request)
        {
            // Columns already reference property names, so no data translation is needed.
            var response = GetDataTableResponse(ref query, request);

            // Returns the entities as-is (array of objects)
            response.Data = query;
            return response;
        }

        private static DataTableResponse GetDataTableResponse<TEntity>(ref IEnumerable<TEntity> query, DataTableRequest request)
        {
            var response = new DataTableResponse();
            // Setting up response
            response.Draw = request.Draw;
            response.RecordsTotal = query.Count();

            // sorting
            query = query.OrderBy(string.Join(",", request.Order.Select(o => o.ToExpression(request))));

            // search
            if (request.Search != null && !string.IsNullOrEmpty(request.Search.Value))
            {
                query = query.Where(request.Search.ToExpression(request));
            }

            // filtering: pending implementation (see Remarks above)


            // Counting results
            response.RecordsFiltered = query.Count();

            // Returning page (subset of filtered records)
            if (request.Start.HasValue && request.Start.Value > 0)
                query = query.Skip(request.Start.Value);
            if (request.Length.HasValue && request.Length.Value >= 0)
                query = query.Take(request.Length.Value);
            return response;
        }

        private static void SetColumnData<TEntity>(DataTableRequest request, Func<TEntity, IEnumerable<string>> func)
        {
            SetColumnData(request, func(default(TEntity)).ToArray());
        }

        private static void SetColumnData(DataTableRequest request, params string[] fields)
        {
            for (int index = 0; index < fields.Length && index < request.Columns.Count; index++)
            {
                var fieldIndex = 0;
                if (int.TryParse(request.Columns[index].Data, out fieldIndex))
                    request.Columns[index].Data = fields[fieldIndex];
            }
        }
    }
}
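
One detail worth highlighting is SetColumnData: when DataTables is bound to arrays, each column's data property is a numeric index rather than a property name, so it is translated before any Dynamic LINQ expression is built. A sketch with hypothetical values:

// Before: request.Columns[0].Data == "0", request.Columns[1].Data == "1", ...
// fields (from getEntityFieldNamesFunc) == { "FirstName", "LastName", ... }
// After SetColumnData: request.Columns[0].Data == "FirstName",
// request.Columns[1].Data == "LastName", ... so the sorting and searching
// expressions reference real entity properties.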

The previous code takes advantage of Dynamic LINQ to build the queries, using the request information as input. With it in place, we can use the following code in our controller:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Dynamic;
using System.Threading.Tasks;
using System.Web.Http;
using DataTables.Extensions;
using DataTables.Models;
using DataTableSample.Models;
using DataTableSample.Services;

namespace DataTableSample.Controllers
{
    public class EmployeesController : ApiController
    {
        private CompanyModel db = new CompanyModel();

        // GET: api/Employees
        public async Task<IHttpActionResult> GetEmployees([FromUri]DataTableRequest request)
        {
            var isUsingArrayOfArrays = request.Columns.Where(c => { var n = 0; return int.TryParse(c.Data, out n); }).Any();
            var response = isUsingArrayOfArrays
                ? db.Employees.AsNoTracking().AsQueryable().GetDataTableResponse(request, GetEntityFieldNames, GetEntityAsEnumerable)
                : db.Employees.AsNoTracking().AsQueryable().GetDataTableResponse(request)
                ;
            return Json(response);
        }

        /// <summary>
        /// Converts the Entity to a sequence of values (an Array)
        /// </summary>
        /// <param name="e"></param>
        /// <returns></returns>
        private System.Collections.IEnumerable GetEntityAsEnumerable(Employee e)
        {
            yield return e.FirstName;
            yield return e.LastName;
            yield return e.Position;
            yield return e.Office;
            yield return e.StartDate;
            yield return e.Salary;
        }

        /// <summary>
        /// Provides the name of the Columns used in the Array
        /// </summary>
        /// <param name="e"></param>
        /// <returns></returns>
        private IEnumerable<string> GetEntityFieldNames(Employee e)
        {
            yield return nameof(e.FirstName);
            yield return nameof(e.LastName);
            yield return nameof(e.Position);
            yield return nameof(e.Office);
            yield return nameof(e.StartDate);
            yield return nameof(e.Salary);
        }
    }
}

The last portion of code is the HTML page that sets up the two DataTables:

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <link rel="stylesheet" type="text/css" href="https://cdn.datatables.net/1.10.9/css/jquery.dataTables.min.css" />
    <meta charset="utf-8" />
</head>
<body>
    <h1>Using Objects</h1>
    <table id="example1" class="display" cellspacing="0" style="width:100%;">
        <thead>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </thead>
        <tfoot>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </tfoot>
    </table>
    <h1>Using Arrays</h1>
    <table id="example2" class="display" cellspacing="0" style="width:100%;">
        <thead>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </thead>
        <tfoot>
            <tr>
                <th>First name</th>
                <th>Last name</th>
                <th>Position</th>
                <th>Office</th>
                <th>Start date</th>
                <th>Salary</th>
            </tr>
        </tfoot>
    </table>
</body>
</html>
<script src="Scripts/jquery-1.7.js"></script>
<script src="Scripts/DataTables/jquery.dataTables.js"></script>
<script type="text/javascript">
    $(document).ready(function () {
        $('#example1').DataTable({
            "processing": true,
            "serverSide": true,
            "ajax": "api/Employees",
            "columns": [
                { "data": "FirstName" },
                { "data": "LastName" },
                { "data": "Position" },
                { "data": "Office" },
                { "data": "StartDate" },
                { "data": "Salary" }
            ]
        });

        $('#example2').DataTable({
            "processing": true,
            "serverSide": true,
            "ajax": "api/Employees"
        });
    });
</script>

By running the application, the two tables will be populated: one uses an array of arrays and the other an array of objects.
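
For reference, the two response shapes look roughly like this (the employee values are illustrative):

{
  "draw": 1,
  "recordsTotal": 57,
  "recordsFiltered": 57,
  "data": [ { "FirstName": "Airi", "LastName": "Satou", "Position": "Accountant", "Office": "Tokyo", "StartDate": "2008-11-28T00:00:00", "Salary": 162700 } ]
}

The array-of-arrays variant differs only in the data property, e.g. "data": [ [ "Airi", "Satou", "Accountant", "Tokyo", "2008-11-28T00:00:00", 162700 ] ].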

The full source code can be downloaded from:
https://github.com/hmadrigal/playground-dotnet/tree/master/MsWebApi.DataTables

Cheers,
Herb

Generating JIRA-like tickets using Entity Framework (EF6) and MS SQL (T-SQL)


The problem

We have a master/detail relationship with two entities, and we want to identify detail records by using a friendly ID. The friendly ID is based on a master code (which is specified in the master table) and a detail number (which works like a counter for each given master code). For example:

Master table (when IsAutoGen is 1, the details should auto-generate a friendly ID using the AutoGenPrefix):

Id          IsAutoGen AutoGenPrefix
----------- --------- -------------
1           1         GRPA
2           1         GRPB
5           0         NULL

Detail table: AutoGenPrefix-AutoGenNumber uniquely identifies each ticket by providing a friendly, auto-incremented ID.

Id          MasterId    AutoGenPrefix AutoGenNumber
----------- ----------- ------------- -------------
361         1           GRPA          1
362         2           GRPB          1
363         5           NULL          1
364         1           GRPA          2
365         2           GRPB          2
366         5           NULL          2
367         1           GRPA          3
368         2           GRPB          3

Remarks

Certainly this problem may have many different solutions. For example, it is technically possible to do all the computation using only C# and Entity Framework, with thread locks and database transactions to support concurrency. However, by doing so the developer becomes responsible for dealing with concurrency, and the database is locked for longer periods since the changes happen at the business-logic layer. This approach instead delegates the concurrency handling to the database engine and reduces the lock time where possible. The solution is very particular to Entity Framework and Microsoft SQL Server, although ORMs and triggers are concepts available on most platforms nowadays.

The solution

Just in case you have skipped the text above, I encourage you to read the remarks. That said, let's first create our sample tables by using the following T-SQL script:

CREATE TABLE [dbo].[Master] (
    [Id]            INT         IDENTITY (1, 1) NOT NULL,
    [IsAutoGen]     BIT         CONSTRAINT [DF_Master_IsAutoGen] DEFAULT ((0)) NOT NULL,
    [AutoGenPrefix] VARCHAR (5) NULL,
    CONSTRAINT [PK_Master] PRIMARY KEY CLUSTERED ([Id] ASC)
);
GO
CREATE TABLE [dbo].[Detail] (
    [Id]            INT         IDENTITY (1, 1) NOT NULL,
    [MasterId]      INT         NOT NULL,
    [AutoGenPrefix] VARCHAR (5) NULL,
    [AutoGenNumber] INT         NULL,
    CONSTRAINT [PK_Detail] PRIMARY KEY CLUSTERED ([Id] ASC),
    CONSTRAINT [FK_Detail_Master] FOREIGN KEY ([MasterId]) REFERENCES [dbo].[Master] ([Id])
);
GO

As simple as two tables establishing a master-detail relationship. Now, with the same ease, let's see the suggested trigger:

CREATE TRIGGER  [dbo].[Insert_Detail_ReadableId]
ON [dbo].[Detail]
INSTEAD OF INSERT
AS
BEGIN
   SET NOCOUNT OFF;
	INSERT INTO [Detail] ([MasterId],[AutoGenPrefix], [AutoGenNumber])
	SELECT 
		I.[MasterId] AS [MasterId],
		M2.AutoGenPrefix AS [AutoGenPrefix],
		ISNULL(ROW_NUMBER() OVER(ORDER BY I.[Id]) + M.[AutoGenNumber],1) AS [AutoGenNumber]
	FROM INSERTED  I
	LEFT JOIN [Master] M2 ON I.[MasterId] = M2.[Id]
	LEFT JOIN (
		SELECT 
			[MasterId],
			MAX(ISNULL([AutoGenNumber],0)) AS [AutoGenNumber]
		FROM [AutoGenDB].[dbo].[Detail]
		GROUP BY [MasterId]
	) M ON I.[MasterId] = M.[MasterId]

	SELECT [Id] FROM [Detail] WHERE @@ROWCOUNT > 0 AND [Id] = SCOPE_IDENTITY();
END

Notice that the trigger is responsible for performing the INSERT, since it has been configured as INSTEAD OF INSERT. The trigger uses the INSERTED pseudo-table, which holds all the records that should have been inserted. It performs an INSERT INTO using the INSERTED table as the data source, joined with the current maximum AutoGenNumber per master and a ROW_NUMBER() that is added to that maximum. For example, if GRPA currently has a maximum AutoGenNumber of 3 and two GRPA details are inserted in one statement, ROW_NUMBER() assigns 1 and 2, yielding 4 and 5.
Pay attention to the last line, which returns the inserted ID; this is required by Entity Framework to detect the INSERT operation as successful (see StackOverflow | OptimisticConcurrencyException — SQL 2008 R2 Instead of Insert Trigger with Entity Framework, or StackOverflow | error when inserting into table having instead of trigger from entity data framework).

This is all we need at the T-SQL level. Every time we attempt to INSERT a new record, the trigger intercepts the request and performs the insert itself, computing the additional data.

In our C# code, I’ll assume we’re using an EDMX file and using DB first approach. Thus, we usually only update our DbContext by updating our models from Visual Studio (VS).

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity.Infrastructure;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace AutoGenDB
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var context = new AutoGenDB.AutoGenDBEntities())
            {
                // if master table is empty, then it inserts some master records.
                if (!context.Masters.Any())
                {
                    for (int masterIndex = 0; masterIndex < 5; masterIndex++)
                    {
                        var isAutoGenEnabled = masterIndex % 2 == 0;
                        var newMaster = new Master { IsAutoGen = isAutoGenEnabled, AutoGenPrefix = isAutoGenEnabled ? "GRP" + Convert.ToChar((masterIndex % 65) + 65) : default(string) };
                        context.Masters.Add(newMaster);
                    }
                    context.SaveChanges();
                }

                // Inserts some details on different groups
                List<Detail> details = new List<Detail>();
                List<Master> masters = context.Masters.AsNoTracking().ToList();
                for (int i = 0; i < 100; i++)
                {
                    var masterId = masters[i % masters.Count].Id;
                    details.Add(new Detail() { MasterId = masterId });
                }
                context.Details.AddRange(details);
                context.SaveChanges();

                // Prints some of the auto-generated IDs
                foreach (var master in context.Masters.AsNoTracking().Take(3))
                    foreach (var detail in master.Details.Take(5))
                        Console.WriteLine(master.IsAutoGen ? "Ticket:{0}-{1}" : @"N/A", detail.AutoGenPrefix, detail.AutoGenNumber);
            }
            Console.ReadKey();
        }
    }
}

The previous C# code inserts some records and prints the information generated by the trigger. At this point there is only one missing component.
If we run the code as it is, the application won't load the auto-generated values, because Entity Framework does not know that these columns were populated. One option is to instruct Entity Framework about when it should reload the entities. The following code decorates the Detail class with an IDetail interface, which the DbContext uses to identify the entity types that should be reloaded.

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace AutoGenDB
{
    interface IDetail
    {
        int? AutoGenNumber { get; set; }
        string AutoGenPrefix { get; set; }
        int Id { get; set; }
        Master Master { get; set; }
        int MasterId { get; set; }
    }

    public partial class AutoGenDBEntities : DbContext
    {
        public override int SaveChanges()
        {
            var entriesToReload = ChangeTracker.Entries<IDetail>().Where(e => e.State == EntityState.Added).ToList();
            int rowCount = base.SaveChanges();
            if (rowCount > 0 && entriesToReload.Count > 0)
                entriesToReload.ForEach(e => e.Reload());
            return rowCount;
        }

    }

    public partial class Detail : AutoGenDB.IDetail
    {

    }
}

Please notice how the subclass of the DbContext overrides the SaveChanges method in order to reload the instances of IDetail. Reloading entities may or may not be desirable depending on the application.
With this override, the DbContext knows when a given entity should be reloaded.
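
A minimal usage sketch (building on the model above): after SaveChanges returns, the reloaded entity exposes the values computed by the trigger.

using (var context = new AutoGenDBEntities())
{
    var detail = new Detail { MasterId = 1 };
    context.Details.Add(detail);
    context.SaveChanges(); // the override above reloads IDetail entries
    // The trigger-generated values are now loaded, e.g. "GRPA-4"
    Console.WriteLine("{0}-{1}", detail.AutoGenPrefix, detail.AutoGenNumber);
}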

As usual, the code sample can be checked out at https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.EntityFrameworkAndTriggers

After moving some files, my Windows Phone project throws a XamlParseException


I am not sure whether this is a known issue for Microsoft, but I am aware of it, and this is the first time I am going to document it, since it can be very frustrating.

The Problem

I wanted to reorganize the files in my Windows Phone project to follow our implementation of MVVM; moreover, we decided that the resx files would live in their own assembly project. However, once I moved the resource file to an external assembly, the application just threw a XamlParseException, even though the reference in the XAML was totally correct.

Remarks
A XamlParseException may be thrown for other reasons as well. The important point is that I knew I had moved the RESX file to a different assembly, and everything was working before that. I had certainly updated the reference to the new assembly, and the exception was still being thrown. That is why this is a tricky issue.

The solution

It took me some time to notice, but the project that contains the RESX file cannot have a dot (.) character in its assembly name. If it does, the XAML parser will throw a XamlParseException even though the reference to the assembly is correct. The error message may look like this:

System.Windows.Markup.XamlParseException occurred
  HResult=-2146233087
  Message=Unknown parser error: Scanner 2147500037. [Line: 10 Position: 126]
  Source=System.Windows
  LineNumber=10
  LinePosition=126
  StackTrace:
       at System.Windows.Application.LoadComponent(Object component, Uri resourceLocator)
       at MyApp.App.InitializeComponent()
       at MyApp.App..ctor()
  InnerException: 

Taking a look at the location where I was loading the Resource it was this:

<Application
    x:Class="MyApp.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
    xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone">

    <!--Application Resources-->
    <Application.Resources>
        <local:LocalizedStrings xmlns:local="clr-namespace:MyApp;assembly=MyApp.WindowsPhone8.Resources" x:Key="LocalizedStrings"/>
    </Application.Resources>

    <Application.ApplicationLifetimeObjects>
        <!--Required object that handles lifetime events for the application-->
        <shell:PhoneApplicationService
            Launching="Application_Launching" Closing="Application_Closing"
            Activated="Application_Activated" Deactivated="Application_Deactivated"/>
    </Application.ApplicationLifetimeObjects>

</Application>

In the XAML above, the line with local:LocalizedStrings references the assembly 'MyApp.WindowsPhone8.Resources' generated by one of my projects. To resolve the issue, I only had to change the generated assembly name to 'MyApp_WindowsPhone8_Resources' and then update the reference in the XAML. For example:

        <local:LocalizedStrings xmlns:local="clr-namespace:MyApp;assembly=MyApp_WindowsPhone8_Resources" x:Key="LocalizedStrings"/>

After performing this change your app should work normally.
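
For reference, the generated assembly name is controlled by the AssemblyName property in the project file; an illustrative snippet (your project file contains more elements):

<PropertyGroup>
  <AssemblyName>MyApp_WindowsPhone8_Resources</AssemblyName>
</PropertyGroup>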

Pivot (or Panorama) Index Indicator


Hello,

The Problem:

Creating a pivot (or panorama) indicator can be a challenging task. The control should:

  • Indicate in which item of the collection you are looking at
  • Let you tap on a given item and navigate in the panorama to that given item
  • Let you customize the look and feel of the items.

The Solution: A Second thought

Sounds like the perfect scenario for a custom control, and in some cases it is. However, after some minutes thinking about this idea, I realized that the ListBox supports most of these requirements, with only one pending task: it has to interact properly with the Panorama or Pivot. Thus, the current solution uses a ListBox (with custom styles and templates) to modify the look and feel, and prepares a behavior (more specifically a TargetedTriggerAction<T>) to let the ListBox interact with an index-based collection (e.g. Pivot, Panorama, ListBox, etc.).

A behavior …  (What’s that?)

With XAML, a bunch of new concepts arrived in the .NET development world. One in particular that is very useful is the behavior. You can think of a behavior as encapsulated functionality that can be reused in different contexts on the same type of item. The most popular behavior is probably EventToCommand, which can be applied to any FrameworkElement and maps an event (e.g. Loaded) to a Command when implementing a view model.

Since we already have a ListBox in place, we only need it to stay synchronized with a Pivot or Panorama. Thus, the external control will be a parameter of our behavior.

    [TypeConstraint(typeof(Selector))]
    public class SetSelectedItemAction : TargetedTriggerAction<FrameworkElement>
    {
        private Selector _selector;

        protected override void OnAttached()
        {
            base.OnAttached();

            _selector = AssociatedObject as Selector;
            if (_selector == null)
                return;
        }

        protected override void OnDetaching()
        {
            base.OnDetaching();
            if (_selector == null)
                return;
            _selector = null;
        }

        protected override void Invoke(object parameter)
        {
            if (_selector == null)
                return;

            if (Target is Panorama)
                InvokeOnPanorama(Target as Panorama);
            if (Target is Pivot)
                InvokeOnPivot(Target as Pivot);
            if (Target is Selector)
                InvokeOnSelector(Target as Selector);
        }

        private void InvokeOnSelector(Selector selector)
        {
            if (selector == null)
                return;
            selector.SelectedIndex = _selector.SelectedIndex;
        }

        private void InvokeOnPivot(Pivot pivot)
        {
            if (pivot == null)
                return;
            pivot.SelectedIndex = _selector.SelectedIndex;
        }

        private void InvokeOnPanorama(Panorama panorama)
        {
            if (panorama == null)
                return;
            panorama.DefaultItem = panorama.Items[_selector.SelectedIndex];
        }
    }

The idea is that you should be able to keep other elements in sync by just dropping this behavior on a ListBox and setting up a few properties, for example:

<ListBox
            x:Name="listbox"
			HorizontalAlignment="Stretch" Margin="12" VerticalAlignment="Top"
            SelectedIndex="{Binding SelectedIndex,ElementName=panorama,Mode=TwoWay}"
            ItemsSource="{Binding PanoramaItems}"
            ItemsPanel="{StaticResource HorizontalPanelTemplate}"
            ItemContainerStyle="{StaticResource ListBoxItemStyle1}"
            ItemTemplate="{StaticResource IndexIndicatorDataTemplate}"
            >
            <i:Interaction.Triggers>
                <i:EventTrigger EventName="SelectionChanged">
                    <local:SetSelectedItemAction TargetObject="{Binding ElementName=panorama}"/>
                </i:EventTrigger>
            </i:Interaction.Triggers>
        </ListBox>

The behavior could take on more responsibility than one-way synchronization of the selected item. However, my design decision was to use other mechanisms to keep the indexes in sync (e.g. note the TwoWay binding on the SelectedIndex property of the target Pivot or Panorama).

Also, the ListBox is bound to a list of (empty) items just to be able to generate as many items as the Panorama has. This is another piece of logic that could be moved into the behavior (however, in my two previous attempts it didn't work that well).
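
A minimal sketch of that placeholder collection (the PanoramaItems property bound in the XAML above); the item count is assumed to match the number of Panorama sections:

// View-model property: one empty placeholder per Panorama section (count assumed).
public IEnumerable<object> PanoramaItems
{
    get { return Enumerable.Range(0, 4).Select(_ => new object()); }
}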

Here is the code sample of this simple behavior and how it keeps in sync with the Panorama:

https://github.com/hmadrigal/playground-dotnet/tree/master/MsWinPhone.ParanoramaIndex

The following screenshot shows an indicator in the top-left corner where you can see which tab of the Panorama is currently shown. Moreover, the user can tap on a given item to go directly to that selected tab.

[Image: Panorama index indicator]

Cheers
Herb

Automatic retries using the Transient Fault Handling from Enterprise libraries (EntLib)


Hello,

I was reading about Transient Fault Handling at http://msdn.microsoft.com/en-us/library/hh680901(v=pandp.50).aspx. This is a block of the Enterprise Libraries which is part of the Enterprise Integration Pack for Windows Azure. I haven't played much with Azure yet, but one part of the documentation caught my attention:

When Should You Use the Transient Fault Handling Application Block?
…… (some explanation about when you are using Azure services) ……
You Are Using a Custom Service
If your application uses a custom service, it can still benefit from using the Transient Fault Handling Application Block. You can author a custom detection strategy for your service that encapsulates your knowledge of which transient exceptions may result from a service invocation. The Transient Fault Handling Application Block then provides you with the framework for defining retry policies and for wrapping your method calls so that the application block applies the retry logic. Source: http://msdn.microsoft.com/en-us/library/hh680901(v=pandp.50).aspx#sec10

So I thought: why do I always have to write the retry logic if this is already done? ;-)

The problem

We want to write a generic retry mechanism for our application. Moreover, we need to support some level of customization through configuration files.

The solution
Simple: let's set up the Transient Fault Handling Block (from now on, FHB). First of all, you will have to download the libraries; you can do it using NuGet, which by default also adds references to support Windows Azure. You can remove the Windows Azure DLLs if you're not going to use Windows Azure in your app.

The FHB lets you specify an error detection strategy as well as a retry strategy when you construct the retry policy. A retry policy supports the execution of Action delegates, Func<TResult> delegates, and asynchronous variants for async retries.

As part of the configuration process for the FHB, it's required to indicate which errors should be handled as transient. A mechanism for doing this is implementing the interface ITransientErrorDetectionStrategy. In our sample, we only want to retry when a FileNotFoundException is thrown; thus we have coded the FileSystemTransientErrorDetectionStrategy:

using System;
using System.IO;
using Microsoft.Practices.TransientFaultHandling;

namespace SampleConsoleApplication.TransientErrors.Strategies
{
    public class FileSystemTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy
    {
        #region ITransientErrorDetectionStrategy Members

        public bool IsTransient(Exception ex)
        {
            return ex is FileNotFoundException;
        }

        #endregion
    }
}

Certainly you could implement it to handle any particular exception.
Now we need to define a retry policy, which is done by specifying a retry strategy. The following code configures a retry policy by providing an Incremental retry strategy. Additionally, in our sample service.DoSlowAndImportantTask always fails, so the FHB will retry automatically based on the retry policy.

 private static void RetryPolityUsingCode(IUnityContainer container, IService service, OutputWriterService writer)
        {
            writer.WriteLine("Begin sample: RetryPolityUsingCode");
            // Define your retry strategy: retry 5 times, starting 1 second apart
            // and adding 2 seconds to the interval each retry.
            var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));

            // Define your retry policy using the retry strategy and our file system
            // transient fault detection strategy.
            var retryPolicy = new RetryPolicy<FileSystemTransientErrorDetectionStrategy>(retryStrategy);

            try
            {
                // Do some work that may result in a transient fault.
                retryPolicy.ExecuteAction(service.DoSlowAndImportantTask);
            }
            catch (Exception exception)
            {
                // All the retries failed.
                writer.WriteLine("An Exception has been thrown:\n{0}", exception);
            }
            writer.WriteLine("End sample: RetryPolityUsingCode");
        }

Additionally it’s possible to specify the retry policy in the configuration file, for example:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="RetryPolicyConfiguration" type="Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling.Configuration.RetryPolicyConfigurationSettings, Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling" requirePermission="true" />
    <section name="typeRegistrationProvidersConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.TypeRegistrationProvidersConfigurationSection, Microsoft.Practices.EnterpriseLibrary.Common" />
  </configSections>
  <RetryPolicyConfiguration defaultRetryStrategy="Fixed Interval Retry Strategy">
    <!-- strategy values are illustrative -->
    <fixedInterval name="Fixed Interval Retry Strategy" retryCount="6" retryInterval="00:00:01" />
  </RetryPolicyConfiguration>
</configuration>

Please notice that there is a defaultRetryStrategy attribute indicating that the "Fixed Interval Retry Strategy" will be used. Our C# code asks for the RetryManager instance in order to retrieve the retry policy and then perform the task.

private static void RetryPolityUsingConfigFile(IUnityContainer container, IService service, OutputWriterService writer)
        {
            writer.WriteLine("Begin sample: RetryPolityUsingConfigFile");
            // Gets the current Retry Manager configuration
            var retryManager = EnterpriseLibraryContainer.Current.GetInstance<RetryManager>();

            // Asks for the default Retry Policy. Keep on mind that it's possible to ask for an specific one.
            var retryPolicy = retryManager.GetRetryPolicy();
            try
            {
                // Do some work that may result in a transient fault.
                retryPolicy.ExecuteAction(service.DoSlowAndImportantTask);
            }
            catch (Exception exception)
            {
                // All the retries failed.
                writer.WriteLine("An Exception has been thrown:\n{0}", exception);
            }
            writer.WriteLine("End sample: RetryPolityUsingConfigFile");
        }

The Transient Fault Handling Block is a simple and flexible alternative for a well-known task: an automatic retry system. Moreover, it supports configuration files, so we can change the retry policy just by modifying our configuration file.

Here is a screenshot of the app, which automatically retries calling our mischievous method.

[Image: Transient Fault Handling retries output capture]

As usual the code sample is at https://github.com/hmadrigal/playground-dotnet/tree/master/MsEntLib.TransientErrorHandling

Multiple configuration files per environment


Hello,

Today I had the chance to do some quick research on a classic problem: we have different settings for development, testing, staging and production, and we usually end up merging or losing settings in the configuration files. Visual Studio 2008 already solved this problem (at least for web applications) by allowing the user to create different web.config files and applying XSLT to generate the final configuration file. Today I'll propose the same solution, but for desktop apps; hopefully you can apply it to any XML file.

Just in case you’re one of those who want to download a sample, you coudl get it at https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.MultipleConfigurationFile

The Problem:

We have a configuration file (.config or any XML document), and we want different settings per environment (development, testing, staging and production). Moreover, we want to automate this process so we don't have to write the settings manually.

Remarks

  • It’s good to know basics of MSBuild and how to modify project.
  • It’s good to know about XML, XPath and XSLT

The solution

Create one configuration per environment 

Let’s start by creating configuration settings for each environment where you need custom settings. You could do it by click on “Configuration Manager…“. For example I’ve created the environments that we’re gonna use on this example.

[Image: Configuration Manager with one build configuration per environment]

Now, add a configuration file (in our case app.config) and also add one config file per environment. Please note that the configuration file app.config contains most of the final content, and the other configuration files hold only XSLT expressions to modify the original. By convention the file names are app.$(environment).config. The following image illustrates the configuration files:

[Image: Solution Explorer showing app.config and the per-environment config files]

More in detail, app.config contains three simple settings:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="EnvironmentName" value="Debug" />
    <add key="ApplicationDatabase" value="Data Source=DevelopmentServer\SQLEXPRESS;Initial Catalog=DevelopmentDatabase;Integrated Security=True;" />
    <add key="ServerSharedPath" value="\\127.0.0.1" />
  </appSettings>
</configuration>

Set up your project to run the XslTransformation task once the compilation is done

Now you will have to edit the project file using a text editor. VS project files are XML; normally VS reads them to display the project structure, but some tasks can only be customized by editing the XML file manually. You can modify the XML of a project within Visual Studio by right-clicking the project in the Solution Explorer, clicking "Unload Project", and then right-clicking again and selecting "Edit project…". Now you should be able to see the XML associated with the selected project.

You’ll be adding two targets (or if modifying if any of them is already in place). The following code:

<Target Name="ApplySpecificConfiguration" Condition="Exists('App.$(Configuration).config')">
    <XslTransformation XmlInputPaths="App.config" XslInputPath="App.$(Configuration).config" OutputPaths="$(OutputPath)$(RootNamespace).$(OutputType).config" />
    <Copy SourceFiles="$(OutputPath)$(RootNamespace).$(OutputType).config" DestinationFiles="$(OutputPath)$(RootNamespace).vshost.$(OutputType).config" />
  </Target>
  <Target Name="AfterBuild">
    <CallTarget Targets="ApplySpecificConfiguration" />
  </Target>

The previous XML portion adds a custom target called ApplySpecificConfiguration, which uses MSBuild variables to determine the name of the configuration file. Our example is a console app, so we also need a copy of the config file with the vshost.config extension, which is why we added a Copy task. The second thing we've added is a call to the new target inside the AfterBuild target, which MSBuild invokes automatically when the build process has finished.

Transform your app.config with XSLT

On this section, we’re gonna write the content for each configuration. This will let us custom XML porting into the final configuration file. As example, here is the XML for the production transformation:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl">
  <xsl:output method="xml" indent="yes"/>

  <!-- Nodes to be replaced -->
  <xsl:template match="/configuration/appSettings/add[@key='EnvironmentName']">
    <add key="EnvironmentName" value="Production" />
  </xsl:template>
  <xsl:template match="/configuration/appSettings/add[@key='ApplicationDatabase']">
    <add key="ApplicationDatabase" value="Data Source=PRODUCTION\SQLEXPRESS;Initial Catalog=ProductionDatabase;Integrated Security=True;" />
  </xsl:template>
  <xsl:template match="/configuration/appSettings/add[@key='ServerSharedPath']">
    <add key="ServerSharedPath" value="\\PRODUCTION" />
  </xsl:template>

  <!-- Copy all the remaining nodes -->
  <xsl:template match="node()|@*">
    <xsl:copy>
      <xsl:apply-templates select="node()|@*"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>

I know XSLT might sound intimidating at first, but it's actually fairly easy. By the way, XSLT uses XPath, so if you want to learn the basics of both, check out W3Schools for a quick reference: XPath at http://www.w3schools.com/xpath/xpath_examples.asp and XSLT at http://www.w3schools.com/xsl/xsl_transformation.asp

To put it in a nutshell, our XSLT detects three specific attributes (by name and location in the XML) and replaces them with custom values. All the remaining nodes are copied into the final XML.
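
Applying this transformation to the app.config shown earlier yields the following final configuration file:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="EnvironmentName" value="Production" />
    <add key="ApplicationDatabase" value="Data Source=PRODUCTION\SQLEXPRESS;Initial Catalog=ProductionDatabase;Integrated Security=True;" />
    <add key="ServerSharedPath" value="\\PRODUCTION" />
  </appSettings>
</configuration>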

Let's run our sample

Just to make it work I’ve coded two classes, one that defines the constant name of the settings and a main class which print values from the configuration file.

using System;
using System.Configuration;

namespace MultipleConfigurationFilesSample
{
    public static class ApplicationConstants
    {
        public class AppSettings
        {
            public const string ServerSharedPath = @"ServerSharedPath";
            public const string EnvironmentName = @"EnvironmentName";
            public const string ApplicationDatabase = @"ApplicationDatabase";
        }
    }
    class Program
    {
        static void Main(string[] args)
        {
            var environment = ConfigurationManager.AppSettings[ApplicationConstants.AppSettings.EnvironmentName];
            var serverSharedPath = ConfigurationManager.AppSettings[ApplicationConstants.AppSettings.ServerSharedPath];
            Console.WriteLine(@"Environment: {0} ServerSharedPath:{1}", environment, serverSharedPath);
            Console.ReadKey();
        }
    }
}

Now I have run the application twice, using different configurations. The first run used the Development configuration and produced the following output:

[Image: console output for the Development configuration]

The second run used the Production configuration, and it produced the following output:

[Image: console output for the Production configuration]

As you can see, the selected environment determines the settings in the resulting configuration file.

As usual, the source code can be found at https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.MultipleConfigurationFile

References: http://litemedia.info/transforming-an-app-config-file

Best regards,
Herber

How to create a plug-in using C# .NET (4.0) and MEF


Hello,

A long time ago I wrote about my approach to writing a plugin (add-in) using Microsoft Unity (http://unity.codeplex.com/). My old approach (https://hmadrigal.wordpress.com/2010/03/28/writing-a-plug-in/) used the unity.config file to load the plugins dynamically. Certainly the major issue with that approach is that you have to update your unity.config file every time you add a new plugin.

As usual there is a code sample of all this at:
https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.PluginWithMEF The code runs in MonoDevelop as well as in Visual Studio.

On this post I’ll explain how to easily use Microsoft Extensibility Framework (MEF) http://msdn.microsoft.com/en-us/library/dd460648.aspx to load plugins from a directory. Additionally MEF is a framework to create plugins and extensible applications, so it supports more operations other than loading types from external assemblies.

The problem

I want to load different language implementations provided in different assemblies. Each DLL might (or might not) contain an implementation of a supported language. Moreover, the assemblies can be implemented by third parties, so I cannot bind them to my project at compile time.

The Solution

Defining what in my application needs to be extensible
Define a common contract in order to identify a class which provides a supported language. Moreover, all the "extensible" portions of your application should be encapsulated in a single assembly that can be redistributed to the developers who want to implement plugins.
By the way, when implementing plug-ins it's normal to define a contract (or interface) which describes the required functionality. I'll be using the word "part" when talking about contracts or interfaces.
The project MefAddIns.Extensibility defines an interface which can be used to provide a supported language.

namespace MefAddIns.Extensibility
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    /// <summary>
    /// Defines the functionality that third parties have to implement for my language plug in
    /// </summary>
    public interface ISupportedLanguage
    {
        string Author { get; }
        string Version { get; }
        string Description { get; }
        string Name { get; }
    }
}

Implementing the plug-in
Now it's time to provide implementations of supported languages. Each of the projects MefAddIns.Language.English, MefAddIns.Language.Japanese and MefAddIns.Language.Spanish provides at least one implementation of the interface ISupportedLanguage. These projects could be developed internally or by third parties. An example of such an implementation is:

namespace MefAddIns.Language.English
{
    using MefAddIns.Extensibility;
    using System.ComponentModel.Composition;

    /// <summary>
    /// Provides an implementation of a supported language by implementing ISupportedLanguage.
    /// Moreover it uses Export attribute to make it available thru MEF framework.
    /// </summary>
    [Export(typeof(ISupportedLanguage))]
    public class EnglishLanguage : ISupportedLanguage
    {
        public string Author
        {
            get { return @"Herberth Madrigal"; }
        }
        public string Version
        {
            get { return @"EN-US.1.0.0"; }
        }
        public string Description
        {
            get { return "This is the English language pack."; }
        }
        public string Name
        {
            get { return @"English"; }
        }
    }
}

Please note that the Export attribute tells the MEF framework that the class EnglishLanguage can be used when an ISupportedLanguage is requested. In other words, if any application is looking for an ISupportedLanguage part, then MEF will offer EnglishLanguage. In further steps we are going to create a MEF catalog, which is the mechanism that associates exported parts with the members that need to import parts.

Defining the parts that my application needs to import
The simplest way of doing this with MEF is by using the Import attribute. In this case I'll be using ImportMany, because I'll be expecting many (zero or more) implementations of the ISupportedLanguage interface. For example:

namespace MefAddIns.Terminal
{
    using MefAddIns.Extensibility;
    using System.Collections.Generic;
    using System.ComponentModel.Composition;

    public class Bootstrapper
    {
        /// <summary>
        /// Holds a list of all the valid languages for this application
        /// </summary>
        [ImportMany]
        public IEnumerable<ISupportedLanguage> Languages { get; set; }

    }
}

Composing your objects by using (MEF) catalogs
Before getting into details I’ll like to point out some concepts:

Part: It’s the portion of our code that can be exported or imported.
Export: It’s the fact of indicating that a portion of our code is available and it fits a set of traits. E.g. EnglishLanguage implements the ISupportedLanguage Interface, thus I can add the attribute Export(typeof(ISupportedLanguage)).
Import: It’s the fact of indicating that a portion of our code requires a part that fits a set of traits. E.g. I need to import a give language, then I define a property of ISupportedLanguage type and set the attribute Import(typeof(ISupportedLanguage)).
Catalog: It a source of Exports and Import definitions. It could be the current execution context or a set of binaries files (such as dlls, exes).
Container: It’s an module which deals with class instantiation. This item uses the catalog when needed in order to create the instances during the composition process.
Composition: It’s the fact of taking an object and making sure that all its needs regarding imports are fulfilled. Certainly this will depends on the catalogs and the export and imports that are being used on the catalog.

Now that we know more about some MEF concepts, let's talk about MEF in our example. MEF detects that some code exports parts and other code imports them. The parts are loaded by creating a catalog. There are many ways of creating catalogs; my example uses a DirectoryCatalog. The DirectoryCatalog looks for files in a directory and creates a catalog with all the exported parts found in the assemblies (or files) of that directory. Finally, our application creates an instance of the Bootstrapper class; this instance declares that it requires all the available instances of ISupportedLanguage from the DirectoryCatalog. Here is a sample of this:

namespace MefAddIns.Terminal
{
    using System.ComponentModel.Composition.Hosting;
    using System.ComponentModel.Composition;
    using System;

    class Program
    {
        static void Main(string[] args)
        {
            var bootStrapper = new Bootstrapper();

            //An aggregate catalog that combines multiple catalogs
            var catalog = new AggregateCatalog();
            //Adds all the parts found in the same directory where the application is running
            var currentPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetAssembly(typeof(Program)).Location);
            catalog.Catalogs.Add(new DirectoryCatalog(currentPath));

            //Create the CompositionContainer with the parts in the catalog
            var _container = new CompositionContainer(catalog);

            //Fill the imports of this object
            try
            {
                _container.ComposeParts(bootStrapper);
            }
            catch (CompositionException compositionException)
            {
                Console.WriteLine(compositionException.ToString());
            }

            //Prints all the languages that were found into the application directory
            var i = 0;
            foreach (var language in bootStrapper.Languages)
            {
                Console.WriteLine("[{0}] {1} by {2}.\n\t{3}\n", language.Version, language.Name, language.Author, language.Description);
                i++;
            }
            Console.WriteLine("It has been found {0} supported languages",i);
            Console.ReadKey();
        }

    }
}

Now that we have all that our application requires, we’re able to run the app and see the following output:

Example of the output where it lists all the supported languages
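
The screenshot is not reproduced here, but based on the EnglishLanguage metadata above (and assuming the three language packs are deployed), the output looks roughly like:

[EN-US.1.0.0] English by Herberth Madrigal.
	This is the English language pack.

(similar entries for the Japanese and Spanish packs)
Found 3 supported languages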

Hey, it's not printing anything!!!
Ok. There is an additional thing to talk about, which is not strictly related to MEF. As you might have noticed, our sample application does not have references to any of the MefAddIns.Language.* DLLs, but it's running code from them. Here is the deal: since our application uses MEF, we don't need to reference these DLLs directly, we just need to copy them into the directory where the DirectoryCatalog is going to look for them. The code sample looks into the directory where the application is running, so we have configured our main csproj to copy the DLLs to the output directory of our main application.
To keep it simple, you can do it in two steps 😉
1) Add the external DLLs into your project. First, right click on your main project and click on “Add existing item…”. Look for the DLL and select “Add as a Link”. Then configure the properties of the added file to “Copy always” or “Copy if newer”.
2) Update the csproj file. Right click on your project and select “Unload project”. Then once again right click on your project and select “Edit xxxxx.csproj”. This will open the project as an XML file. Look for the added DLL and replace portions of the path with MSBuild variables such as $(Configuration). Note that there are more ways to do this, such as using the Copy task from MSBuild (see http://stackoverflow.com/questions/266888/msbuild-copy-output-from-another-project-into-the-output-of-the-current-project ). In our sample this was enough because we are looking for DLLs in the current directory, but depending on your file structure this may not be recommended.

<ItemGroup>
    <Content Include="
..\MefAddIns.Language.English\bin\$(Configuration)\MefAddIns.Language.English.dll;
..\MefAddIns.Language.Japanese\bin\$(Configuration)\MefAddIns.Language.Japanese.dll;
..\MefAddIns.Language.Spanish\bin\$(Configuration)\MefAddIns.Language.Spanish.dll;">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>false</Visible>
    </Content>
  </ItemGroup>

As a last comment, this is the most basic sample of using MEF to create a plugin-based structure. There are more things to consider, e.g. the path where your DLLs are going to be placed or how your main application is going to exchange information with the plugins. MEF also has more options for creating extensible applications. And it's important to consider when it's worth creating a plugin-based architecture rather than a tightly coupled app.

EDIT In my last update I modified the project (csproj) to make it run on MonoDevelop ;-). The major change is that some MSBuild tasks aren't available in XBuild. So the last step, which modified the csproj, now looks like:

<ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <Content Include="..\MefAddIns.Language.English\bin\Debug\MefAddIns.Language.English.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
    <Content Include="..\MefAddIns.Language.Japanese\bin\Debug\MefAddIns.Language.Japanese.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
    <Content Include="..\MefAddIns.Language.Spanish\bin\Debug\MefAddIns.Language.Spanish.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
  </ItemGroup>
  <ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <Content Include="..\MefAddIns.Language.English\bin\Release\MefAddIns.Language.English.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
    <Content Include="..\MefAddIns.Language.Japanese\bin\Release\MefAddIns.Language.Japanese.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
    <Content Include="..\MefAddIns.Language.Spanish\bin\Release\MefAddIns.Language.Spanish.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
  </ItemGroup>

The code sample for this application is at:
https://github.com/hmadrigal/playground-dotnet/tree/master/MsDotNet.PluginWithMEF

Best regards,
Herber

Unity 2 Setting up the Chain of Responsibility pattern and using a configuration file with Mono (monodevelop)

The Chain of Responsibility pattern helps to accomplish two main goals when developing software: it keeps very specific functionality in separate modules, and by using Unity we can easily plug and unplug that functionality.

If you haven’t heard about the Chain of Responsibility pattern, see http://en.wikipedia.org/wiki/Chain-of-responsibility_pattern; on the other hand, if you haven’t heard about the Microsoft Unity 2 block (an IoC, or dependency injection, framework), please check http://unity.codeplex.com/.

Now, let's get started with a simple sample.

1. Get Microsoft Unity 2 Block
I'm going to use Mono just to show that it's possible to use Unity on Mono ;-). So, first of all, download the Unity source code from the CodePlex site at http://unity.codeplex.com/SourceControl/list/changesets# . Once you download the source code, you can remove the tests and then compile the project. It should compile properly. (BTW, I'm using Mono 2.4.)

2. In our sample, the application uses repositories to get and store user data. The sample creates a project per layer, and these projects communicate by implementing the interfaces (contracts) from the Infrastructure project. Let's see some notes about each layer.

  • “Data” project will retrieve real data from the database.
  • “Dummy” project will produce dummy data.
  • “Cache” project will perform caching tasks and then delegate the data retrieval to a given successor.
  • “Logging” project will perform logging tasks and then delegate the data retrieval to a given successor.
  • “Infrastructure” project defines the interface that is required for the UserRepository, so all projects know the expected input and output for the UserRepository. Additionally, an ISuccessor interface has been defined in order to identify layers that require a successor (for example Logging and Cache).

For the sake of the sample, IUserRepository defines only one operation, GetUserFullName. In this operation, each implementation will append text indicating that the call passed through that layer. By using Unity 2, we will be able to indicate through a configuration file which layers will be loaded and executed.

using System;
namespace ChainOfResponsability.Infrastructure.Repository
{
	public interface IUserRepository
	{
		string GetUserFullName(int id);
	}
}

This code is a sample of the contract that each layer implements in order to be part of the chain for IUserRepository.
Additionally, ISuccessor is defined as follows:

using System;
namespace ChainOfResponsability.Infrastructure
{
	public interface ISuccessor<T>
	{
		T Successor {get;}
	}
}

This generic interface allows a layer to indicate a successor that performs the next call in the chain.

3. The “Logging” and “Cache” projects will implement the IUserRepository and ISuccessor interfaces, and additionally they will require the successor as a parameter of the constructor in order to delegate the task itself. Unity 2 can only resolve one (anonymous) mapping for a given type within the same container, so my colleague Marvin (aka Murven) suggested a workaround: each layer which implements ISuccessor receives in its constructor the IUnityContainer that resolved it, but instead of using the given container, it uses the container's parent to resolve the type. This way it can get a different resolution for the same type.

using System;
using ChainOfResponsability.Infrastructure.Repository;
using ChainOfResponsability.Infrastructure;
using ContainerExtensions = Microsoft.Practices.Unity.UnityContainerExtensions;
using Microsoft.Practices.Unity;
namespace ChainOfResponsability.Logging.Repository
{
	public class LoggingUserRespository : IUserRepository, ISuccessor<IUserRepository>
	{
		public LoggingUserRespository (IUnityContainer container)
		{
			this.Successor = ContainerExtensions.Resolve<IUserRepository> (container.Parent);
		}

		#region IUserRepository implementation
		public string GetUserFullName (int id)
		{
			return string.Format ("{0}->{1}", this.GetType ().Name, this.Successor.GetUserFullName (id));
		}
		#endregion

		#region ISuccessor[IUserRepository] implementation
		public IUserRepository Successor { get; set; }
		#endregion

	}
}
using System;
using ChainOfResponsability.Infrastructure;
using ChainOfResponsability.Infrastructure.Repository;
using Microsoft.Practices.Unity;
using ContainerExtensions = Microsoft.Practices.Unity.UnityContainerExtensions;
namespace ChainOfResponsability.Cache.Repository
{
	public class CacheUserRepository : IUserRepository, ISuccessor<IUserRepository>
	{
		public CacheUserRepository (IUnityContainer container)
		{
			this.Successor = ContainerExtensions.Resolve<IUserRepository> (container.Parent);
		}

		#region IUserRepository implementation
		public string GetUserFullName (int id)
		{
			return string.Format ("{0}->{1}", this.GetType ().Name, this.Successor.GetUserFullName (id));
		}
		#endregion

		#region ISuccessor[IUserRepository] implementation
		public IUserRepository Successor { get; set; }
		#endregion
	}
}

Even though both files share a similar structure, each one can add very specific functionality (caching or logging, in the previous example) and then invoke the proper successor.
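
In a real application this is where the layer-specific behavior would live. For instance, a cache layer could memoize results before delegating to its successor; here is a minimal sketch (the MemoizingUserRepository class and its Dictionary-based cache are illustrative assumptions, not part of the sample code):

using System.Collections.Generic;
using ChainOfResponsability.Infrastructure;
using ChainOfResponsability.Infrastructure.Repository;
using Microsoft.Practices.Unity;
using ContainerExtensions = Microsoft.Practices.Unity.UnityContainerExtensions;
namespace ChainOfResponsability.Cache.Repository
{
	public class MemoizingUserRepository : IUserRepository, ISuccessor<IUserRepository>
	{
		// Illustrative in-memory cache; a real implementation would add eviction.
		private readonly Dictionary<int, string> _cache = new Dictionary<int, string> ();

		public MemoizingUserRepository (IUnityContainer container)
		{
			// Same workaround as above: resolve the successor from the parent container.
			this.Successor = ContainerExtensions.Resolve<IUserRepository> (container.Parent);
		}

		public string GetUserFullName (int id)
		{
			string fullName;
			if (!_cache.TryGetValue (id, out fullName)) {
				// Only hit the successor on a cache miss.
				fullName = this.Successor.GetUserFullName (id);
				_cache [id] = fullName;
			}
			return fullName;
		}

		public IUserRepository Successor { get; set; }
	}
}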

4. The “Data” and “Dummy” projects will implement only IUserRepository, and they will perform the task itself.

using System;
using ChainOfResponsability.Infrastructure;
using ChainOfResponsability.Infrastructure.Repository;
namespace ChainOfResponsability.Dummy.Repository
{
	public class DummyUserRepository : IUserRepository
	{
		public DummyUserRepository ()
		{
		}

		#region IUserRepository implementation
		public string GetUserFullName (int id)
		{
			return string.Format ("{0}[{1}]", this.GetType ().Name, id);
		}
		#endregion
	}
}
using System;
using ChainOfResponsability.Infrastructure;
using ChainOfResponsability.Infrastructure.Repository;
namespace ChainOfResponsability.Data.Repository
{
	public class DataUserRepository : IUserRepository
	{
		public DataUserRepository ()
		{
		}

		#region IUserRepository implementation
		public string GetUserFullName (int id)
		{
			return string.Format ("{0}[{1}]", this.GetType ().Name, id);
		}
		#endregion
	}
}

In the case of the “Data” and “Dummy” layers, they don't need a successor, so they perform the task themselves and return the proper result.

5. There is an additional step, which is to load all the containers from the configuration file. Basically it's done by looping through all the containers and asking for a new child on each iteration. This way we can create a sort of chain of containers where each one owns a different type resolution for the same given type. I've created a small utility which loads the different containers and creates a chain (container and parents).

        // Requires: using Microsoft.Practices.Unity; using Microsoft.Practices.Unity.Configuration; using System.Configuration;
        public static IUnityContainer container;

        private static void UnityBootStrap()
        {
            container = new UnityContainer();
            UnityConfigurationSection section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");

            #region Mappings based on the configuration file
            IUnityContainer currentContainer = container;
            foreach (var containerSection in section.Containers)
            {
                var nestedContainer = currentContainer.CreateChildContainer();
                Microsoft.Practices.Unity.Configuration.UnityContainerExtensions.LoadConfiguration(nestedContainer, section, containerSection.Name);
                currentContainer = nestedContainer;
            }
            container = currentContainer;
            #endregion
        }

This code loads the Unity configuration file and creates a container which has a parent, whose parent in turn has a different parent, and so on. In this way we can create a chain with different mappings for the same IUserRepository contract. Additionally, the following configuration file has been defined:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration"/>
  </configSections>
  <unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
    <!-- Defines some aliases to easily manipulate the mappings -->
    <alias alias="IUserRepository" type="ChainOfResponsability.Infrastructure.Repository.IUserRepository, ChainOfResponsability.Infrastructure" />
    <alias alias="DataUserRepository" type="ChainOfResponsability.Data.Repository.DataUserRepository, ChainOfResponsability.Data" />
    <alias alias="DummyUserRepository" type="ChainOfResponsability.Dummy.Repository.DummyUserRepository, ChainOfResponsability.Dummy" />
    <alias alias="CacheUserRepository" type="ChainOfResponsability.Cache.Repository.CacheUserRepository, ChainOfResponsability.Cache" />
    <alias alias="LoggingUserRespository" type="ChainOfResponsability.Logging.Repository.LoggingUserRespository, ChainOfResponsability.Logging" />

    <!-- Default Container when creating the tree-chain of resolution-->
    <container>
    </container>
    <!-- lowest level of the resolution chain -->
    <container name="DummyContainer">
      <register type="IUserRepository" mapTo="DummyUserRepository">
        <lifetime type="ContainerControlledLifetimeManager" />
      </register>
    </container>
    <container name="DataContainer">
      <register type="IUserRepository" mapTo="DataUserRepository">
        <lifetime type="ContainerControlledLifetimeManager" />
      </register>
    </container>
    <!-- Middle tiers. As many as needed can be added, as long as they implement the proper interface -->
    <!-- Concrete implementations are chained because they properly implement the ISuccessor interface -->
    <container name="CacheContainer">
      <register type="IUserRepository" mapTo="CacheUserRepository">
        <lifetime type="ContainerControlledLifetimeManager" />
      </register>
    </container>
    <!-- Chain of execution starts here! -->
    <container name="LoggingContainer">
      <register type="IUserRepository" mapTo="LoggingUserRespository">
        <lifetime type="ContainerControlledLifetimeManager" />
      </register>
    </container>
  </unity>
</configuration>

6. We only have a common reference between projects (technically all projects reference “Infrastructure”, and some projects reference Unity). The final project does not need to reference all the implementations, but we have to copy the DLLs of each layer before running the application. Otherwise, we will receive a message like:
“The type name or alias …. could not be resolved. Please check your configuration file and verify this type name.” Assuming that you've written all the type and DLL names correctly, this error appears because the application couldn't find the DLLs required to load the specified type. So, remember to copy the DLLs when running the application.
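
One way to automate that copy, mirroring the MEF sample earlier in this post, is to link the layer DLLs as Content items in the final project's csproj. A sketch, assuming each layer project builds into its own bin\Debug folder (the paths may differ in the actual repository):

<ItemGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
    <Content Include="..\ChainOfResponsability.Data\bin\Debug\ChainOfResponsability.Data.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
    <Content Include="..\ChainOfResponsability.Dummy\bin\Debug\ChainOfResponsability.Dummy.dll">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <Visible>False</Visible>
    </Content>
    <!-- The Cache and Logging DLLs follow the same pattern -->
  </ItemGroup>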

		public static void Main (string[] args)
		{
			UnityBootStrap();
			//var userRepository = Microsoft.Practices.Unity.UnityContainerExtensions.Resolve<IUserRepository>(container);
			var userRepository = ContainerExtensions.Resolve<IUserRepository>(container);
			Console.WriteLine ("Call Stack of Layers when invoking GetUserFullName(0):\n {0} ", userRepository.GetUserFullName(0));
		}

Last but not least, you can modify the configuration file to dynamically change the layers that will be executed without changing your code. For example, our first call will produce:

Calling our Sample with the Data Layer
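
The screenshot is not reproduced here, but from the format strings in the implementations above, the printed chain should read something like:

Call Stack of Layers when invoking GetUserFullName(0):
 LoggingUserRespository->CacheUserRepository->DataUserRepository[0]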

And we can replace the “Data” layer with the “Dummy” layer just by commenting out the “Data” container definition in the configuration file.

Calling with Dummy Layer
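
With the “Data” container commented out, “Dummy” becomes the end of the chain, so the same call should print something like:

Call Stack of Layers when invoking GetUserFullName(0):
 LoggingUserRespository->CacheUserRepository->DummyUserRepository[0]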

Certainly this could open the door to more complex tasks and scenarios. The full source code of this sample is hosted on GitHub at
https://github.com/hmadrigal/playground-dotnet/tree/master/MsUnity.ChainOfResponsability

Best regards,
Herber