Thursday, April 27, 2017

Auto generating core OData v4 controllers from an entity framework code first model

Even though I am no great fan of OData (leaky abstractions and all), I found myself thinking about how I could make it work with ASP.NET Core and Entity Framework Core (there are many posts around saying it cannot be done).

The project that eventuated from these thoughts is on Github.

I fell back on T4 again: REST APIs created with OData will typically sit over an Entity Framework code first model, and writing controllers and repositories by hand for a protocol as well documented as OData seems rather tedious - and ripe for automation.

In summary: by interrogating a DbContext, any exposed DbSet&lt;&gt; objects can be taken to represent resource collections; from there, the entity type of each DbSet&lt;&gt; has properties that may or may not be exposed as parts of the API, as well as navigation properties that may also be exposed.

The project as it stands now uses:
  • Microsoft.AspNetCore.OData.vNext 6.0.2-alpha-rtm as the OData framework
  • Visual Studio 2017
  • EntityFrameworkCore 1.1.1
  • ASP.NET Core 1.1.1
  • ASP.NET Core MVC 1.1.2 
And generates:
  • OData v4 controllers for each resource collection
  • Repositories for each entity type that is exposed 
  • Proxies for each repository, that intercept pre and post events for CUD actions, and allow for optional delegation to user specified intervention proxies
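The interception idea in the last bullet can be sketched as follows. This is a minimal TypeScript sketch for illustration only; the type and member names (`InterventionProxy`, `RepositoryProxy`, `preCreate`, `postCreate`) are assumptions, not the project's actual API:

```typescript
// Hypothetical names: a sketch of pre/post interception around a create action.
type Hook<T> = (entity: T) => void;

interface InterventionProxy<T> {
  preCreate?: Hook<T>;
  postCreate?: Hook<T>;
}

class RepositoryProxy<T> {
  constructor(private intervention: InterventionProxy<T> = {}) {}

  // Fire the pre hook, delegate persistence, then fire the post hook.
  create(entity: T, persist: (e: T) => void): T {
    this.intervention.preCreate?.(entity);
    persist(entity);
    this.intervention.postCreate?.(entity);
    return entity;
  }
}
```

The same shape would apply to update and delete actions; a user-specified intervention proxy simply supplies the optional hooks.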
Attributes are also implemented that allow the generation process to be modified, examples:

Attribute semantics:
  • ApiExclusion: Exclude a DbSet&lt;&gt; from the API
  • ApiResourceCollection: Supply a specific name to a ResourceCollection
  • ApiExposedResourceProperty: Expose a specific entity property as a resource property
  • ApiNullifyOnCreate: Request that a property be nullified when the enclosing object is being created

From the example EF project, below is a DbContext marked up with attributes as desired by the author, excluding a couple of resource collections and renaming one:

 public class CompanyContext : DbContext, ICompanyContext {  
   public CompanyContext(DbContextOptions<CompanyContext> options) : base(options) {  
   }  
   public DbSet<Product> Products { get; set; }  
   public DbSet<Campaign> Campaigns { get; set; }  
   public DbSet<Supplier> Suppliers { get; set; }  
   [ApiResourceCollection(Name = "Clients")]  
   public DbSet<Customer> Customers { get; set; }  
   public DbSet<Order> Orders { get; set; }  
   public DbSet<OrderLine> OrderLines { get; set; }  
 }  

And likewise, for the Customer entity, some markup that exposes certain properties as first class 'path' citizens of the API, and ensures that one must be null when a Customer object is being created via the API:

 public class Customer {  
     public Customer() {  
       Orders = new List<Order>();  
     }  
     public int CustomerId { get; set; }  
     // Attribute placement below is inferred from the surrounding text  
     [ApiExposedResourceProperty]  
     public string Name { get; set; }  
     [ApiExposedResourceProperty]  
     [ApiNullifyOnCreate]  
     public virtual ICollection<Order> Orders { get; set; }  
 }  

The OData controller generated for the Customers resource collection (which has been renamed 'Clients' by attribute usage) is:

 [EnableQuery(Order = (int)AllowedQueryOptions.All)]  
 public class ClientsController : BaseController<ICompanyContext, EF.Example.Customer, System.Int32, IBaseRepository<ICompanyContext, EF.Example.Customer, System.Int32>> {  
   public ClientsController(IBaseRepository<ICompanyContext, EF.Example.Customer, System.Int32> repo) : base(repo) {  
   }  
   public async Task<IActionResult> GetName(System.Int32 key) {  
     var entity = await Repository.FindAsync(key);  
     return entity == null ? (IActionResult)NotFound() : new ObjectResult(entity.Name);  
   }  
   public async Task<IActionResult> GetOrders(System.Int32 key) {  
     var entity = await Repository.FindAsync(key, "Orders");  
     return entity == null ? (IActionResult)NotFound() : new ObjectResult(entity.Orders);  
   }  
 }  

The included BaseController performs most of the basic actions required. And then there is the generated repository, again with a base type doing most of the useful work:

 public partial class ClientsRepository : BaseRepository<ICompanyContext, EF.Example.Customer, System.Int32>, IBaseRepository<ICompanyContext, EF.Example.Customer, System.Int32> {  
     public ClientsRepository(ICompanyContext ctx, IProxy<ICompanyContext, EF.Example.Customer> proxy = null) : base(ctx, proxy) {  
     }  
     protected override async Task<EF.Example.Customer> GetAsync(IQueryable<EF.Example.Customer> query, System.Int32 key) {  
       return await query.FirstOrDefaultAsync(obj => obj.CustomerId == key);  
     }  
     protected override DbSet<EF.Example.Customer> Set { get { return Context.Customers; } }  
     public override System.Int32 GetKeyFromEntity(EF.Example.Customer e) {  
       return e.CustomerId;  
     }  
 }  

Wednesday, April 19, 2017

Text template engine for generating content

I implemented this a while ago for the startup, and had been meaning to publish it to GitHub, which I now have. It was used to transform a multitude of text templates into email message content, and a flexible way of doing that was required.

The idea is trivial, treat a stream of bytes/characters as containing substitution tokens and rewrite those tokens using a supplied context. Also includes iteration, expression support, context switching and a few other minor aspects.
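As an illustration of that core idea, here is a minimal TypeScript sketch; the real engine's token syntax and API differ, and it additionally supports iteration, expressions and context switching:

```typescript
// Minimal sketch: rewrite {{token}} markers using values from a context object.
// The token syntax and function name are illustrative, not the engine's own.
function transform(template: string, ctx: Record<string, unknown>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, name: string) =>
    name in ctx ? String(ctx[name]) : "");
}
```

For example, `transform("Hello {{user}}!", { user: "Ada" })` yields `"Hello Ada!"`, and unknown tokens collapse to the empty string.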

It's a VS 2017, C# solution, targeting .netcoreapp 1.1 and .net 4.6+.

Once set up, transformation is ultra trivial, assuming a text template and some domain object being supplied to the pro forma method shown below:

        private string GenerateMessage<TObject>(string doc, TObject ctx) {
            ParseResult res = Parser.Parse(doc);
            EvaluationContext ec = EvaluationContext.From(ctx);
            var result = res.Execute(ExecutionContext.Build(ec));
            return result.context.ToString();
        }

Tuesday, January 3, 2017

REST API with a legacy database (no foreign keys!) - a T4 and mini DSL solution

Encountered yet again: a legacy database with no foreign keys and hundreds of tables, forming the backbone of a CRUD-style REST API that had to support expansions and the like.

It wasn't that there were no identifiable relationships between objects, just that the way the database was generated meant they were not captured in the database schema. But expansions had to be supported in an OData-like fashion. So, assuming you had a resource collection called Blogs, and each Blog object had sub resources of Author and Readers, you should be able to issue a request like the following for a Blog with an id of 17:

GET /Blogs(17)?$expand=Author,Readers

and expect a response to include expansions for Author and Readers.

That's easy then: just use the Entity Framework mapping/fluent API to configure an independent association with any referenced objects. That can work, and often does. But it does not cope well when certain OData abstractions are included in the mix - and I was using those to expose some required filtering capabilities to consumers of the REST API. Simply put, when creating an ODataQueryContext using the ODataConventionModelBuilder type, independent associations cause it to implode in a most unpleasant fashion.

So, if I can't use independent associations, and each resource may have 1..n associations which are realised using joins, I can:
  • Write a mega query that always returns everything for a resource, all expansions included
  • Write specific queries by hand for performance reasons, as the need arises
  • Generate code that maps expansions to specific implementations
Writing by hand was going to be tedious, especially as some of the resources involved had 4 or more expansions.

When I thought about the possible expansions for a resource, and how they can be associated with that resource using non-FK joins, it became apparent that I was dealing with a power set of possibilities.

For the Blogs example, with expansions of Author and Readers, I'd have the power set:

{ {}, {Author}, {Readers}, {Author, Readers} }
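That enumeration, and the per-subset method naming it drives, can be sketched as below. This is illustrative TypeScript (the real processing happens in the DSL processor and T4), and `methodNames` is a hypothetical helper showing the default Get_&lt;resource&gt;_&lt;expansions&gt; naming convention:

```typescript
// Enumerate the power set of the declared expansions.
function powerSet(items: string[]): string[][] {
  return items.reduce<string[][]>(
    (sets, item) => sets.concat(sets.map(s => [...s, item])),
    [[]]
  );
}

// Derive a generated method name per subset, e.g. Get_Blogs_Author.
function methodNames(resource: string, expansions: string[]): string[] {
  return powerSet(expansions).map(s => ["Get_" + resource, ...s].join("_"));
}
```

For Blogs with Author and Readers this yields Get_Blogs, Get_Blogs_Author, Get_Blogs_Readers and Get_Blogs_Author_Readers: four methods for the four subsets.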

So the idea that formed was:
  • Use a mini DSL to capture, for a resource, its base database query, and how to provision any expansions
  • Process that DSL to generate pro forma implementations
  • Use a T4 text template to generate C# code
I solved this in one way for a client, but then completely rewrote it at home because I thought it might be of general use. I then extended that with a T4 template that generates an MVC 6 REST API, meaning all the common patterns that you typically see in a REST API are represented.

The actual VS 2015 C# solution is on Github. The implementation itself is a bit more sophisticated than described here, which is abridged for brevity.

The purpose of the DSL input file is to describe resources, their associated database base query, and any expansions and how they are realised. There are two formats supported - plain text and JSON.

An elided text file example for Blogs is:

1:  tag=Blogs  
2:  singular-tag=Blog  
3:  model=Blog  
4:  # API usage  
5:  restResourceIdProperty=BlogId  
6:  restResourceIdPropertyType=int  
7:  #  
8:  baseQuery=  
9:  (await ctx.Blogs  
10:  .AsNoTracking()  
11:  .Where(expr)  
12:  .Select(b => new { Blog = b })  
13:  {joins}  
14:  {extraWhere}  
15:  .OrderBy(a => a.Blog.BlogId)  
16:  .Skip(skip)   
17:  .Take(top)  
18:  .ToListAsync())  
19:  #  
20:  expansion=Posts  
21:  IEnumerable<Post>  
22:  .GroupJoin(ctx.Posts, a => a.Blog.NonKeyField, post => post.NonKeyField, {selector})  
23:  #   
24:  expansion=Readers  
25:  IEnumerable<Party>  
26:  .GroupJoin(ctx.Parties.Where(r => r.Disposition == "reader"),   
27:     a => a.Blog.NonKeyField, party => party.NonKeyField, {selector})  
28:  #   
29:  expansion=Author  
30:  Party  
31:  .Join(ctx.Parties, a => a.Blog.NonKeyField, party => party.NonKeyField, {selector})  
32:  .Where(p => p.Author.Disposition == "author")  

Relevant lines:

  • Line 1: starts a resource definition
  • Lines 5-6: allow this DSL instance to be used to generate a REST API
  • Lines 8-18: The base query to find blogs, along with specific markup that will be changed by the DSL processor (e.g. {selector}, {joins} and so on)
  • Lines 24-27: A definition of an expansion - linking a reader to a blog if a Party entity has a disposition of "reader" and the column "NonKeyField" of a Blog object matches the same column in a Party object. The expansion results in an IEnumerable<Party> object.
  • Lines 29-32: an Author expansion, this time (line 32) including a predicate to apply

Example class
After running the T4 template over the DSL file, a c# file is produced that includes a number of classes that implement the intent of the DSL instance.

The Blogs class (as generated) starts like this:

1:  public partial class BlogsQueryHandler : BaseQueryHandling {   
3:    protected override string TagName { get; } = "Blogs";  
5:    public const string ExpandPosts = "Posts";  
6:    public const string ExpandAuthor = "Author";  
7:    public const string ExpandReaders = "Readers";  
9:    public override IEnumerable<string> SupportedExpansions   
10:          { get; } = new [] { "Posts", "Author", "Readers"};  


  • Line 1: The Blogs query handler class subtypes a base type, generated in the T4, that provides common behaviour inherited by all generated classes
  • Lines 5-7: All the expansions defined in the DSL instance are exposed
  • Lines 9-10: An enumerable of all supported expansions is likewise created

Example method
Harking back to the power set comment, a method is generated for each subset of the power set, representing the query necessary to realize the intent of the expansion (or lack thereof).

Part of the pre-T4 activity generates queries for each subset using the content of the DSL instance. Methods are named accordingly (there are a number of configuration options in the T4 file; I'm showing the default options at work).

As below, the method name generated in T4 for getting blogs with the Author expansion applied is Get_Blogs_Author (and similarly, Get_Blogs, Get_Blogs_Readers, Get_Blogs_Author_Readers).

1:  private async Task<IEnumerable<CompositeBlog>>   
2:    Get_Blogs_Author(  
3:     BloggingContext ctx,   
4:     Expression<Func<Blog, bool>> expr,   
5:     int top,   
6:     int skip) {   
7:      return   
8:          (await ctx.Blogs  
9:          .AsNoTracking()  
10:          .Where(expr)  
11:          .Select(obj => new { Blog = obj })  
12:          .Join(ctx.Parties,   
13:              a => a.Blog.NonKeyField,   
14:              party => party.NonKeyField,   
15:              (a, author) => new { a.Blog, Author = author})  
16:          .Where(p => p.Author.Disposition == "author")  
17:          .OrderBy(a => a.Blog.BlogId)  
18:          .Skip(skip)  
19:          .Take(top)  
20:          .ToListAsync())  
21:          .Select(a => CompositeBlog.Accept(a.Blog, author: a.Author));  
22:    }  

Some comments:

  • Line 1: Declared privately as more general methods will use the implementation
  • Line 3: The EF context type is part of T4 options configuration
  • Line 4: Any 'root' resource expression to be applied
  • Lines 5-6: Any paging options supplied externally
  • Lines 7-21: The generated query, returning an enumerable of CompositeBlog, a class generated by DSL processing that can hold the results of expansions and the root object

Generated 'top level' methods
As the generated 'expanded' methods are declared private, 'top level' methods are exposed. This makes the generated class easier to use: you pass in the expansions required, and reflection is used to locate the appropriate implementation to invoke.
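The name-based lookup can be sketched as follows; illustrative TypeScript stands in for the C# reflection code, the method bodies are placeholders, and sorting the expansion set to get a canonical name is an assumption of this sketch:

```typescript
// Sketch: build the per-subset method name from the requested expansions and
// look the implementation up by name (the C# original uses reflection).
class QueryHandler {
  private Get_Blogs(): string { return "no expansions"; }
  private Get_Blogs_Author(): string { return "with author"; }

  getWithExpansion(expansions: string[]): string {
    const name = ["Get_Blogs", ...expansions.slice().sort()].join("_");
    const impl = (this as any)[name];
    if (typeof impl !== "function") {
      throw new Error("unsupported expansion set: " + name);
    }
    return impl.call(this);
  }
}
```

An unsupported combination fails fast rather than silently ignoring expansions.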

Two variants are generated per resource class - one for a collection of resources, one for a specific resource. The 'collection' style entry point is:

1:  public async Task<IEnumerable<CompositeBlog>>  
2:            GetBlogsWithExpansion(  
3:               BloggingContext ctx,   
4:               Expression<Func<Blog, bool>> expr = null,   
5:               int top = 10,   
6:               int skip = 0,   
7:               IEnumerable<string> expansions = null) {   
8:    return await GetMultipleObjectsWithExpansion<CompositeBlog, Blog>  
9:                 (ctx, expr, expansions, top, skip);  
10:  }  


  • Lines 3-7: The EF context to use, along with a base expression (expr) and paging requirements and any expansions to be applied
  • Lines 8-9: Call a method defined in the BaseQueryHandler generated class to find the correct implementation and execute

Example use
Imagine this closely connected to a REST API surface (there is a T4 template that can do this, integrating with Swashbuckle as well). The paging, expansion and filter (expression) requirements will be passed in with a request from an API consumer and, after being sanitised, given in turn to a generated query handler class. So the example given is somewhat contrived.

A concrete test example appears below:

1:  using (BloggingContext ctx = new BloggingContext()) {  
2:   var handler = new BlogsQueryHandler();  
3:   var result = await handler.GetBlogsWithExpansion(  
4:             ctx,   
5:             b => b.BlogId > 100,   
6:             10,   
7:             10,   
8:             BlogsQueryHandler.ExpandAuthor,   
9:             BlogsQueryHandler.ExpandReaders);  
10:   // .. Do something with the result  
11:  }  


  • Line 1: Create a context to use
  • Line 2: Create an instance of the generated class
  • Line 3: Call the collection entry point of the generated class
  • Lines 4-7: Supply the EF context, an expression and top and skip specifications 
  • Lines 8-9: Add in some expansions

The T4 template has a 'header' section that allows for various options to be changed. I won't go into detail, but it is possible to change the base namespace for generated classes, the EF context type needs to be correct, whether a JSON or text format DSL file is being used, whether the 'advanced' DSL form is used - and so on. The GitHub page supplies more detail.

 // ****** Options for generation ******   
 // Namespaces to include (for EF model and so on)  
 var includeNamespaces = new List<string> { "EF.Model" };  
 // The type of the EF context  
 var contextType = "BloggingContext";   
 // Base namespace for all generated objects  
 var baseNamespace = "Complex.Omnibus.Autogenerated.ExpansionHandling";  
 // The DSL instance file extension of interest (txt or json)  
 var srcFormat = "json";  
 // True if the advanced form of a DSL instance template should be used  
 var useAdvancedFormDSL = true;  
 // Form the dsl instance file name to use  
 var dslFile = "dsl-instance" + (useAdvancedFormDSL ? "-advanced" : string.Empty) + ".";  
 // Default top if none supplied  
 var defaultTop = 10;  
 // Default skip if none supplied  
 var defaultSkip = 0;  
 // True if the expansions passed in should be checked  
 var checkExpansions = true;  
 // If true, then expansions should be title cased e.g. posts should be Posts, readers should be Readers and so on  
 var expansionsAreTitleCased = true;  
 // ****** Options for generation ******   

Friday, October 21, 2016

Angular 2: Creating decorators for property interception

As part of 'polishing' the esoteric languages testbed Angular 2 SPA, I thought it might be useful to allow entered source code to be auto-magically persisted. This led me on a small journey into the ng2 decorator mechanisms, which are surprisingly easy to implement and reminiscent of C# attributes, but without the static limitations.

.Net Core MVC hosted solution on GitHub. Node package source also on GitHub.

The essence of the idea was to be able to decorate a property of a type and have any setting of its value to be automatically persisted - along with a suitable getter implementation.

Sort of as shown below, meaning both the language and sourceCode properties should be persistent. The @LocalStorage decoration implies strongly that this persistence should be in HTML 5 local storage.

1:  export class ExecutionComponent {  
2:    @LocalStorage('ELTB') language: string;  
3:    @LocalStorage('ELTB') sourceCode: string;
4:    programOutput = '';  
5:    programInput = '';  
6:    running = false;   
7:    inputRequired = false;  
9:    constructor(private _esolangService: EsolangService) {  
10:      console.log('built EC');  
11:    }  
12:  }  

So, how do you achieve this? There are plenty of detailed articles around for how to implement a decorator (at the class, property etc level), so I'm not going to describe it in detail.

It's easier just to present the code below, which has these main points of interest (note that this is aggregated code for presentation purposes from the node package source for this project):

  • Lines 2-7: Define an interface that represents the 'shape' of an object that can act as an interceptor for property gets and sets
  • Lines 9-14: Another interface, that defines the contract for an options type; one that can be passed as part of the decorator if it is required to achieve more finely grained behaviour, supply a factory for creating DelegatedPropertyAction instances and so on
  • Lines 16-35: the local storage decorator function entry point, that can be called with a union of types; either a string or an object that implements the AccessorOptions interface
  • Lines 37-39: a decorator function entry point for allowing general property interception, e.g. @PropertyInterceptor({ storagePrefix: '_', createToJsonOverride: false }). An example is shown later on.
  • Lines 41-82: A function that returns a function implementing the general property interception behaviour, directed somewhat by an instance of AccessorOptions
  • Lines 85-113: An implementation of a DelegatedPropertyAction that gets and sets based on local storage

2:  export interface DelegatedPropertyAction {  
3:    propertyKey: string;  
4:    storageKey: string;  
5:    get(): any;  
6:    set(newValue: any): any;  
7:  }  
9:  export interface AccessorOptions {  
10:    storagePrefix?: string;  
11:    factory?(propertyKey: string, storageKey: string): DelegatedPropertyAction;  
12:    preconditionsAssessor?(): boolean;  
13:    createToJsonOverride?: boolean;  
14:  }  
16:  export function LocalStorage(optionsOrPrefix: string | AccessorOptions) {  
17:    function ensureConfigured(opts: AccessorOptions): AccessorOptions {  
18:      opts.preconditionsAssessor =  
19:        opts.preconditionsAssessor ||  
20:        (() => window.localStorage && true);  
21:      opts.factory =  
22:        opts.factory ||  
23:        ((p, c) => new LocalStorageDelegatedPropertyAction(p, c));  
24:      return opts;  
25:    }  
26:    return AccessHandler(  
27:      ensureConfigured(  
28:        typeof optionsOrPrefix === "string" ?  
29:        <AccessorOptions>{  
30:          storagePrefix: optionsOrPrefix,  
31:          createToJsonOverride: true  
32:          }  
33:          : optionsOrPrefix  
34:      ));  
35:  }  
37:  export function PropertyInterceptor(options: AccessorOptions) {  
38:    return AccessHandler(options);  
39:  }  
41:  function AccessHandler(options: AccessorOptions) {  
42:    return (target: Object, key?: string): void => {  
44:      function makeKey(key: string) {  
45:        return (options.storagePrefix || '') + '/' + key;  
46:      }  
48:      if (!options.preconditionsAssessor || options.preconditionsAssessor()) {  
50:        let privateName = '$__' + key, storeKey = makeKey(key);  
52:        target[privateName] = options.factory(key, storeKey);  
54:        Object.defineProperty(target, key, {  
55:          get: function () {  
56:            return (<DelegatedPropertyAction>this[privateName]).get();  
57:          },  
58:          set: function (newVal: any) {  
59:            (<DelegatedPropertyAction>this[privateName]).set(newVal);  
60:          },  
61:          enumerable: true,  
62:          configurable: true  
63:        });  
65:        const notedKey = '_notedKeys', jsonOverride = 'toJSON';  
67:        target[notedKey] = target[notedKey] || [];  
68:        target[notedKey].push(key);  
70:        options.factory(notedKey, makeKey(notedKey)).set(target[notedKey]);  
72:        if (options.createToJsonOverride && !target.hasOwnProperty(jsonOverride)) {  
73:          target[jsonOverride] = function () {  
74:          let knownKeys = <Array<string>>target[notedKey];  
75:            let result = { _notedKeys: knownKeys };  
76:            knownKeys.forEach(x => result[x] = target[x]);  
77:            return result;  
78:          };  
79:        }  
80:      }  
81:    }  
82:  }  
85:  export class LocalStorageDelegatedPropertyAction implements DelegatedPropertyAction {  
87:    storageKey: string;  
88:    propertyKey: string;  
89:    private val: any;  
91:    constructor(propertyKey: string, canonicalKey: string) {  
92:      this.propertyKey = propertyKey;  
93:      this.storageKey = canonicalKey;  
92:      this.val = JSON.parse(this.read());  
95:    }  
97:    get(): any {  
98:      return this.val;  
99:    }  
101:    set(newValue: any) {  
102:      this.write(JSON.stringify(newValue));  
103:      this.val = newValue;  
104:    }  
106:    private read() {  
107:      return localStorage.getItem(this.storageKey) || null;  
108:    }  
110:    private write(val: any) {  
111:      localStorage.setItem(this.storageKey, val);  
112:    }  
113:  }  

So, a contrived re-writing of the very first example, which adds no real value, could be:

1:  @LocalStorage('ELTB') language: string;  
2:  @LocalStorage({   
3:     storagePrefix: 'ELTB',   
4:     factory: (p, c) =>   
5:       new LocalStorageDelegatedPropertyAction(p, c) })   
6:    sourceCode: string;  

The solution on GitHub is a trivial test one, an example from its use is below, showing local storage contents mirroring the page content:

Tuesday, July 19, 2016

Angular JS 2, RxJs, ASP.NET Core, .NETStandard, Typescript, Web sockets - new Github project

In essence
Having had my 'head down' in a rather pressing commercial engagement, I've had little time to experiment with some of the .NET and UI framework ecosystem changes that have been occurring.

So I decided to combine a whole truck load of them into one effort, creating an ASP.NET Core webapp esoteric language testbed (based on my esoteric interpreters GitHub project).

There are some screen shots at the end of this post showing an example in action. It's definitely a WIP, not quite ready to put on GitHub. In summary, the testbed had to:

  • Communicate with a REST API to determine what languages are available for use (supported languages are determined using a simple plugin system written with MEF 2)
  • Accept a language source program
  • Use web sockets to request that the web app start remote interpretation, and allow client side 'interrupt' when the remotely executing program requires some user input
  • Have the execution page be fully contextual in terms of what is, and is not, permitted at any point

ASP.NET Core webapp
This was reasonably straightforward to put together and get to work. Things that did bite:
  • To work with AngularJS 2 RC2, the version of npm had to be different from the one shipped with VS 2015, meaning I had to fiddle with the external web tools settings to set the path
  • Initial restore of bower and npm packages took a long time, and there was little in the way of progress indication sometimes
  • Adding references to PCLs or .NET Standard assemblies often blew up the project.json file, resulting in duplicate Microsoft.NetCore.Platforms and Microsoft.NetCore.Targets dependencies that defeated package resolution. Editing project.json by hand cured this, but was not a pleasant experience
  • Running under IIS; what with creating an app pool running no managed code (IIS reverse proxying out to a Kestrel instance running in a different process) and then having to use the publish feature of VS to get it to work - I spent most of my time working with IIS express instead
  • Using a PCL as a reference in the web app causes all sorts of conniptions; VS 2015 still refuses to recognise the interfaces defined in a PCL of my own creation, and sometimes the build would fail. However, building using the .NET core command line tool (dotnet.exe) would cure this. Frustrating. 
AngularJS 2 
I never used Angular prior to v2. I never really had the opportunity, always seeming to be working in shops that used KnockoutJS (which, it must be said, is still a tidy library) or Ext JS (with its attendant steep learning curve).

Using it for this exercise was a pleasure. Sure, lots of set up issues, churn in the space, changes to routing discovered half way through, using Typescript (I know that is not mandatory!) - but all in all, positive.

There are a fair few components in the solution right now, but the key one is TestBedComponent, which in turn has two child components, LanguageComponent and ExecutionComponent - the first allowing the selection of a language to use for interpretation, with the languages being derived from calling an injected service, the second being responsible for the 'real' work:
  • Allowing the entry of an esoteric program
  • Using an injected service to request remote execution
  • Responding to interrupts from the remote execution, meaning user input is required - showing a text box and button to allow the entry of user input that is then sent via the service to the remote server
The TestBedComponent has this form:

 import { Component, EventEmitter, Input, Output, ViewChild } from "@angular/core";  
 import { LanguageComponent } from './language.component';  
 import { ExecutionComponent } from './execution.component';  
 @Component({  
   selector: "testbed",  
   template: `  
       <languageSelector (onLanguageChange)="languageChanged($event)"></languageSelector>  
   `,  
   directives: [LanguageComponent, ExecutionComponent]  
 })  
 export class TestBedComponent {  
   @ViewChild(ExecutionComponent) private _executionComponent: ExecutionComponent;  
   currentLanguage: string;  
   languageChanged(arg) {  
     console.log('(Parent) --> Language changed to ' + arg);  
   }  
 }  

I'm just embedding the two main components in the template, and using a reference to the execution component to communicate a change in the selected language. The change is handled by the LanguageComponent, which exposes an event emitter that the test bed component listens to.

There are other ways of doing this, such as using a shared service, but I wanted to experiment with as many different parts of Angular 2 as possible, rather than be a purist :-)
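The emitter wiring can be reduced to a framework-free sketch; SimpleEmitter stands in for Angular's EventEmitter, and the class names here are illustrative, not the project's:

```typescript
// Minimal stand-in for Angular's EventEmitter: subscribe and emit.
class SimpleEmitter<T> {
  private listeners: Array<(value: T) => void> = [];
  subscribe(fn: (value: T) => void): void { this.listeners.push(fn); }
  emit(value: T): void { this.listeners.forEach(fn => fn(value)); }
}

// Child exposes the emitter (an @Output in Angular terms).
class LanguageChild {
  onLanguageChange = new SimpleEmitter<string>();
  select(lang: string): void { this.onLanguageChange.emit(lang); }
}

// Parent subscribes, as (onLanguageChange)="languageChanged($event)" does in a template.
class TestBedParent {
  currentLanguage = "";
  constructor(child: LanguageChild) {
    child.onLanguageChange.subscribe(lang => { this.currentLanguage = lang; });
  }
}
```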

The language component uses an iterated Bootstrap row; it's (too) simple at the moment, but uses an ngFor to present a list of languages discovered after consulting a service, template excerpt as below:

 <div class="row">  
   <div class="col-xs-3" *ngFor="let language of languages">  
    <button class="btn btn-primary" (click)="languageChanged($event)">  

The execution component is a little more interesting, having a more involved template, as below:

 <form (ngSubmit)="run()" #executionForm="ngForm">  
       <div class="row">  
         <div class="col-xs-12">  
           <h4>Source code</h4>  
         <div class="col-xs-12">  
           <textarea cols="80" rows="10" [(ngModel)]="sourceCode" style="min-width: 100%;"   
            name="sourceCode" required [disabled]="running"></textarea>  
       <div class="row">  
         <div class="col-xs-6">  
           <button type="submit" class="btn btn-primary" [disabled]="!executionForm.form.valid || running">  
           <button type="button" class="btn btn-danger" (click)="cancel()" [disabled]="!running">  
       <div class="row" *ngIf="inputRequired">  
         <div class="col-xs-12">  
         <div class="col-xs-12">  
           <input [(ngModel)]="programInput" name="programInput" required/>  
           <button type="button" class="btn btn-primary" (click)="send()"   
       <div class="row">  
         <div class="col-xs-12">  
         <div class="col-xs-12">  
           <textarea cols="80" rows="10" [value]="programOutput" disabled style="min-width: 100%;"></textarea>  

As you can see, it uses a form, and a range of one and two way bindings, and a few ngIf's to control visibility depending on context.

The actual implementation of this component is also quite simple:
1:  export class ExecutionComponent {  
2:    language: string;  
3:    sourceCode: string;  
4:    programOutput = '';  
5:    programInput = '';  
6:    running = false;   
7:    inputRequired = false;  
8:    constructor(private _esolangService: EsolangService) {  
9:      console.log('built EC');  
10:    }  
11:    changeLanguage(lang) {  
12:      this.language = lang;  
13:      console.log(this.sourceCode);  
14:    }  
15:    run() {  
16:      console.log('Run! --> ' + this.sourceCode);  
17:      this.running = true;  
18:      this.programOutput = '';  
19:      this._esolangService.execute(  
20:        this.sourceCode,  
21:        {  
22:          next: m => this.programOutput += m,  
23:          complete: () => this.cancel()  
24:        },  
25:        () => this.inputRequired = true  
26:      );  
27:    }   
28:    send() {  
29:      console.log('Sending ' + this.programInput);  
30:      this._esolangService.send(this.programInput);  
31:      this.inputRequired = false;  
32:    }  
33:    cancel() {  
34:      this.running = this.inputRequired = false;  
35:      this._esolangService.close();  
36:    }   
37:  }  

Lines of interest:

  • 8 - EsoLangService is injected as a private member
  • 11 - target language is changed
  • 19-26 - the eso lang service is asked to execute the supplied source code. A NextObserver&lt;any&gt; is supplied as argument 2, and is hooked up internally within the service to an RxJs web sockets Subject. The third argument is a lambda that is called when the service receives a web socket message indicating that user input is required. On receipt of this, inputRequired changes, which in turn affects this part of the template, displaying the user input text box and Send button:
<div class="row" *ngIf="inputRequired">
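The EsolangService contract the component relies on can be sketched as below. This is a hand-rolled stand-in (the socket factory, message shapes and action names are assumptions, not the real API); the actual service wraps a web socket in an RxJS Subject:

```javascript
// Minimal sketch of the EsolangService contract used by ExecutionComponent.
// A socket factory is injected so the sketch stays self-contained; the real
// implementation uses an RxJS Subject over a browser WebSocket.
function EsolangService(socketFactory) {
  this._socketFactory = socketFactory;
}
EsolangService.prototype.execute = function (source, observer, onInputRequired) {
  this._socket = this._socketFactory();
  // Route incoming messages: input requests, completion, or program output.
  this._socket.onmessage = function (msg) {
    if (msg.type === 'input-required') onInputRequired();
    else if (msg.type === 'complete') observer.complete();
    else observer.next(msg.data);
  };
  this._socket.send(JSON.stringify({ action: 'execute', source: source }));
};
EsolangService.prototype.send = function (input) {
  this._socket.send(JSON.stringify({ action: 'input', data: input }));
};
EsolangService.prototype.close = function () {
  if (this._socket) this._socket.close();
};
```

The component's run(), send() and cancel() methods map directly onto execute, send and close.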

Screen shots
Just a few, with WARP as the target, executing a program that derives prime numbers from some user entered upper bound.

Initial page

Source entered

Execution started, but input required interrupt received

Execution complete

Saturday, June 4, 2016

Knockout Validation and ASP.NET MVC view model integration

I came across this particular issue a while ago, where I was using Knockout validation and had some validation-annotated C# view models that were delivered by a REST API. I didn't have time on the project where I encountered this to solve it in an elegant fashion - so decided to do that in my spare time.

The problem to solve is to somehow have the validation attributes that have been applied to the C# view models applied in the same manner to view models created in the (Knockout JS based) client.

Doing this by hand is obviously clumsy and error prone, so instead the solution I now have:
  • Exposes Web API end points that can be queried by the client to gather meta data
  • Has a simple Javascript module that can interpret the response from a meta data serving endpoint call, applying Knockout validation directives (extensions) to a view model
The VS 2015 solution lives in GitHub.

A simple example follows - consider the C# view model below:

 public class SimpleViewModel {  
     public string FirstName { get; set; }  
     public string Surname { get; set; }  
     [Required(ErrorMessage = "You must indicate your DOB")]  
     [Range(1906, 2016)]  
     public int YearOfBirth { get; set; }  
     public string Pin { get; set; }  
 }  

Below is a Web API method that can serve metadata on demand (security considerations ignored). It's all interface driven and pluggable, so not limited to the standard MVC data annotations or Knockout validation translation. Server-side view model traversal is recursive and collection aware, so arbitrarily complex view models can be interrogated.

 public class DynamicValidationController : ApiController {  
     public dynamic MetadataFor(string typeName) {  
       return new ValidationMetadataGenerator()  
          /* ... configuration and generation elided in the original post ... */ ;  
     }  
 }  

And finally, a very simple client use of the Javascript module. This example HTTP GETs a method that includes the validation metadata along with the view model in a wrapped type, but this need not be the case. The call to vmi.decorate is the key one, applying as it does the relevant metadata to the ko mapped view model using standard Knockout validation directives.

         function (response) {  
           var obj = ko.mapping.fromJS(response.Model);  
           vmi.decorate({  
             model: obj,  
             parsedMetadata: response.ValidationMetadata,  
             enableLogging: true,  
             insertedValidatedObservableName: 'validation'   
           });  
           ko.validation.init({ insertMessages: false });  
           ko.applyBindings(obj, $('#koContainer')[0]);  
         }  
The object passed to decorate or decorateAsync also allows you to supply a property name (insertedValidatedObservableName) that will be set with a validatedObservable created during metadata interpretation - this is a convenience, meaning that after the example code above executes, calling obj.validation.isValid() will return true or false correctly for the entire view model.

Metadata on the wire looks like this:
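The original screenshot of the wire format hasn't survived here. As a purely illustrative sketch (the property names are assumptions, not the generator's actual output), the metadata for SimpleViewModel might resemble:

```javascript
// Hypothetical wire shape only - the real property names come from the
// ValidationMetadataGenerator and are not shown in this post.
var metadata = {
  TypeName: "SimpleViewModel",
  Properties: [{
    Name: "YearOfBirth",
    Validators: [
      { Type: "required", ErrorMessage: "You must indicate your DOB" },
      { Type: "range", Minimum: 1906, Maximum: 2016 }
    ]
  }]
};
```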

Saturday, November 7, 2015

Quorum implementation

Having always been interested in distributed consensus/quorum/master slave systems, I decided to implement a form of quorum software (C# of course). It helped me solve an issue I had with a web farm deployment where a Windows service needed to execute on exactly one machine (the 'master') at any time.

As the machines could be brought in and out of service at any time, the master had to be an elected or agreed active machine. So I wrote Quorum to avoid having a single point of failure.

It's a familiar take on replicated state machines that does not aspire to the giddy heights of Paxos or Raft. But for all that, it is simple and, thus far, reliable (enough :-).

Currently hosted on GitHub, here.

Has a simple MVC web app for quorum viewing, as below:

Tuesday, August 18, 2015

NHooked - a web hook provider framework

Always in need of a side project, I thought I'd create a web hook provider framework - a WIP for sure, but showing promise. I've hosted it on GitHub - repo here.

README content:

nhooked (en-hooked) is a C# framework designed to allow a flexible web hook provider to be created. By flexible, I mean:

  • Reliable
  • Scalable
  • Simple

There are a great number of 'how to's', blogs and the like which describe how to consume web hooks, but few that indicate how to create an implementation that can serve as a web hook provider, i.e. the source of the events that web hook consumers await.
nhooked tries to provide such a framework, making it as pluggable as possible and providing some base implementations.

Thursday, July 30, 2015

IIS 8.5 awful performance on first request to MVC site

My home dev machine is (currently) running Windows 8.1, and therefore IIS 8.5; it has 16GB RAM, a well proportioned SSD, and 4 cores @ 3.5GHz. When I develop my MVC applications, I normally run the MVC app itself as a web site under IIS, with its own app pool - mainly because, although IIS Express is good, I don't find it good enough for my purposes.

Everything has been going swimmingly. After building any site, response to the first request has been < 5 seconds; sure, some compilation is going on, and I'm in full debug mode, but these times do not matter too much in the grand scheme of things.

Except when it spikes, and remains spiked.

This happened a few days ago. The site I was currently extending ballooned its first request response time from about 4 seconds to 2 minutes and finally to 3 minutes in next to no time at all. Now, I like the opportunity to think as much as the next person, but 3 minutes is beyond a joke.

Cue a Google search regarding such IIS performance issues. The list of suggested culprits went on and on: perhaps an unreachable database connection, slow DNS lookup, running as 32 bit (!), all sorts of variations on IIS app pool settings (auto start, suspend and so on), secret Microsoft CRL queries - you get the idea.

A couple of the less whacky ones I tried, to no avail. Then I thought I'd give DebugDiag a go, so downloaded, installed, created some rules, captured some dumps. Nothing.

Back to basics seemed to be in order. So I installed procmon, recycled the app pool in question, set procmon going, filtering on just w3wp.exe.

I then issued the first request, and the response was, as expected, glacial. But this is where a blunt instrument like procmon sometimes wins out - because it highlighted immediately that w3wp.exe was issuing hundreds of thousands of requests to write files. Examining one of the files told me immediately that this was my fault - it was an assembly binding log file, as produced by the Fusion sub system. Fully 2 minutes 50+ of the response time was spent creating log files.

I'd forced Fusion logging on some months before, to debug an obscure issue I had with loading AForge.NET assemblies. So, open regedit, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion, and set ForceLog to 0.
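For reference, the same change expressed as a registry export; merging a .reg file like this disables forced Fusion logging:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion]
"ForceLog"=dword:00000000
```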

Recycle app pool, issue first request. 4 second response time again!

Thursday, June 11, 2015

AWS SES, VERP and hMailServer

A .NET MVC (C#) application I'm responsible for has occasion to send emails to clients of the application's users. The email address associated with a client comes from two sources:

  • The application users entering it on behalf of their clients 
  • From third party systems that users can elect to integrate our system with

Of course, we do some basic 'well formed' validation of email addresses, but the acid test comes when you attempt to use that address.

Being fully paid up AWS hosted citizens, we use AWS SES as our SMTP server. But we also self host hMailServer for our domain; for a variety of reasons, it makes sense to partition the 'work horse' email sending to SES, and the basic domain email handling to hMailServer (given that SES does not 'do' POP).

So how do we handle bounced emails from SES? We need to let our users know if any of their clients did not receive an email. But how? As usual with SMTP, there is no trivial answer, but thankfully there is a relatively easy one.

I looked at the usual suspects (message id, custom headers and so on), but VERP seemed to provide the best route (and SES supports it).

In passing, I should note it is possible to do some of this with SNS - but really, for our application, it offered no real advantage - 'real time' delivery of notifications was not important, just best efforts, and there was only one sink for the bounce message.

So, in summary, this is what I ended up doing, and it is working well enough in production now.

Dispatching an email to a client: I use the basic SmtpClient of .NET to dispatch the email to the (in our case) single recipient. When a MailMessage is created, the From address is set to an email address verified with SES and belonging to our domain. But I also set the sender address to be a modified form of that address, using a standard VERP style separator, example:

From address:

The point of this is to ensure that the context I need (8922 in the Sender address above) to interpret a bounce message is honoured by SES; this is one of the few ways to do it.
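In code terms, the scheme amounts to encoding a context id into the sender's local part and decoding it back out of a bounce's To address. A sketch ('notify' and 'example.com' are placeholders; the real local part is an SES-verified address on our domain):

```javascript
// Build a VERP-style sender address: local part, '+' separator, context id.
// e.g. a context of 8922 yields notify+8922@example.com
function buildVerpSender(localPart, domain, contextId) {
  return localPart + '+' + contextId + '@' + domain;
}

// Recover the context id from the To address of a bounce message.
function extractVerpContext(bounceToAddress) {
  var m = /^[^+]+\+([^@]+)@/.exec(bounceToAddress);
  return m ? m[1] : null;
}
```

The context id is whatever key lets the application find the original recipient - here it travels in the address itself, which SES honours.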

SES interpretation: SES dispatches the email to the recipient, and on detecting some form of failure, sends an email to the Sender address, not the From address; that is, the VERP marked up address, not the plain verified address.

hMailServer: hMailServer is a well featured mail server for Microsoft platforms. But it hides its VERP support well, calling it instead 'Plus addressing'.

In short, if we don't activate this part of hMailServer, our bounce messages from SES will bounce as well.

If you need to activate VERP style addresses for hMailServer, you need to:

  • Open the hMailServer application
  • Click on the domain you are interested in enabling VERP support for
  • Go to the advanced tab
  • Click 'Enabled' for 'Plus addressing'
  • Select a separation character; in my case of course, I used '+'
Final actions: All the above being done, we had a reasonable feedback loop for bounce handling. As the platform we host already has a cooperating set of agents that perform scheduling, it was easy to slot in another component (discovered by MEF) to:
  • Periodically query the mailbox (using OpenPOP), and read the content
  • For every message that has an SES bounce signature and a To address that matches our VERP address, attempt to decode the To address, extract the context, and then act upon it
  • When done with a message that could be handled, audit it and delete it
It's a simple approach I grant you, but one that works without issue to date.

Wednesday, June 3, 2015

OAuth 2.0 frameworks and platforms

Like some of you in the .NET world, I have had occasion to consider the use of OAuth 2.0. My real in depth exposure to OAuth came a few years ago when I was considering the relative merits of Enterprise API vendor offerings for a bank (think Layer 7, Apigee, WSO2 and so on).

A vendor product is all well and good if you have the money (notwithstanding Apigee's efforts in the 'free' space recently). But if you don't, and you work in a Microsoft shop?

What seems to generally be the case is that the first framework/library one thinks of is DotNetOpenAuth (DNOA). It has a relatively trouble free 3rd party OAuth provider for MVC integration - very nice.

But on two occasions now, I have tried to use DNOA to build an Authorization server, and it has found every way to confound the effort.

In what ways? Well:

  • Samples that often do not work out of the box
  • Source code that needs tweaking
  • No 'authorization server as a platform' base
  • Poor documentation
So I set to searching again. And encountered the Thinktecture Identity Server (TTIS) - and this proved to be a revelation, in every way possible. As a free offering, it really is an excellent piece of work, helping you create an Authorization Server, and Resource Server integrations without breaking too much of a sweat. What I found:

  • A tidy code base, really, Thinktecture have thought about it
  • A testable code base
  • Interface driven - all or most of the key services can be replaced with your own implementations
  • A basic authorization server implementation
  • A REST API for various identity server operations
  • IIS or self host options - being OWIN driven
  • An entity framework library that allows for easy database hosting of core objects (tokens, clients, consents and so on)
  • Adequate documentation
So, as an example, I needed to replace the IUserService with one that consulted an API instead of some database (somewhere). A breeze. Want to retain client secrets in clear text so that clients can be informed of them if necessary? No problem (let's not argue security on this one!).

So, next time you want to look at open source, free OAuth software - take a long look at TTIS. It really is far more a platform than a library.

Wednesday, December 24, 2014

More esoterica

Lack of posts for a while....been incredibly busy on the side project:

However, did find time to do some 'Deadfish' work - trivial, but amusing:
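For the curious, Deadfish is about as trivial as esolangs get: four commands (i, d, s, o) acting on a single accumulator, which by convention resets to 0 on reaching -1 or 256. An interpreter fits in a few lines of JavaScript:

```javascript
// Toy Deadfish interpreter: i = increment, d = decrement, s = square,
// o = output the accumulator. After each command, an accumulator of -1
// or 256 resets to 0 (the language's one quirk).
function deadfish(program) {
  var acc = 0, out = [];
  for (var i = 0; i < program.length; i++) {
    switch (program[i]) {
      case 'i': acc++; break;
      case 'd': acc--; break;
      case 's': acc *= acc; break;
      case 'o': out.push(acc); break;
    }
    if (acc === -1 || acc === 256) acc = 0;
  }
  return out;
}
```

The canonical example program 'iissisdo' yields [288].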

Sunday, October 27, 2013

knockout js (and others)

For a side project I'm involved with, one of the requirements is to have a web site that is quite slick, eschewing post backs wherever possible. I'm treating each distinct page of the web site like an SPA - being self contained and driven by client side script.

So I'm using a combination of JQuery, JQuery UI and knockout. I appreciate knockout, but it does take some time and patience to craft a working site with it. I'm primarily using it for the templating implementation - I looked at others in this space, such as Handlebars.js and Mustache.js, but settled on knockout.

Here is an extant example (using, horror of horrors, a table - after creating a css implementation that just felt wrong!), binding an 'exercise' object to a table row. I'm using a private HTML 5 style attribute (data-classify) for my own purposes.

 <tbody id='exerciseBody' data-bind='foreach: exercise'>  
  <tr data-bind="attr: { 'data-classify': ex.Definition }">  
   <td><input type="checkbox"   
        data-bind="attr: {   
                 name: checkBoxName, value: ex.Id },   
                 checked: isChecked"/>Select</td>  
   <td><img width="128" height="72"   
        data-bind="attr: { src: '/AJAX/Image/' + ex.ImageIds[0] }" /></td>  
   <td><img width="128" height="72"  
        data-bind="attr: { src: '/AJAX/Image/' + ex.ImageIds[1] }" /></td>  
   <td data-bind="text: ex.Description"></td>  
  </tr>  
 </tbody>  

This is the view model that is supplied as a member of the bound collection:

  var SelectionViewModel =   
   function (ex, isSelected, updateFn) {  
    this.ex = ex;  
    this.checkBoxName = 'c' + ex.Id;  
    this.isChecked = ko.observable(isSelected);  
    this.isChecked.subscribe(function (s) {  
     (updateFn || $.noop)(s, this.ex);  
    }, this);  
   };  

As can be seen, this object subscribes to change events on the isChecked member, which is bound to the 'checked' property of the check box.

An interesting piece of JQuery behaviour is used as well: wild card matching on an element attribute. The JS code below shows this in action - in the else branch, we hide all rows whose (private) data-classify attribute does not contain the selected term, and show those whose attribute does.

 self.selectedTerm.subscribe(function (s) {  
  var searchBase = $("#exerciseBody");  
  if (self.selectedTerm() === anyTerm) {  
   searchBase.find("tr").show(); // 'any' term selected: show all rows  
  }  
  else {  
   searchBase.find("tr:not([data-classify*='" + self.selectedTerm() + "'])").hide();  
   searchBase.find("tr[data-classify*='" + self.selectedTerm() + "']").show();  
  }  
 });  

Saturday, June 29, 2013

The Turing completeness of WARP

While not essential, I thought it might be amusing to be able to state that WARP is Turing complete. Not wishing to appeal to deeper theory, like Turing computable or mu recursive functions, and not wanting to rely on mere personal belief, the easiest mechanism to achieve this noble (!) goal was to write a WARP program that could interpret a language known to be Turing complete.

The best candidate was our old Turing tarpit, brainfuck. And thus the hideousness below - but it works...sure, glacial performance, but working. Because the eso interpreters I write are simple C# console apps, I employ the pipe mechanism of the command line to provide the brainfuck source to the WARP bf interpreter; as an example, "Hello world" is shown below (excuse the clumsy line breaks for formatting):

 echo "+++++ +++++[> +++++ ++ > +++++ +++++> +++> +   
 <<<< - ]> ++ .> + .+++++ ++ ..+++ .> ++ .<< +++++ +++++   
 +++++ .> .+++ .----- -.----- --- .> +.> ." | warp brainf.warp  

And the WARP source for the interpreter:
1:  =psN5D=pcps  
2:  @s,l=bs!:bs:0?0?^.p%bs@m}pc>pc1^_m|^.s  
3:  @p=espc=bf0=pcps@r{pc=cc!:"]":cc?0?^.l=ad0:"+":cc  
4:  ?0?=ad1:"-":cc?0?=ad-1:">":cc?0?>bf1  
5:  :"<":cc?0?<bf1:".":cc?0?^.o:bf:0?-1?=bf0{bf>!ad  
6:  }bf@n>pc1:es:pc?0?^.e^.r@o{bf(!^.n@l{bf:0:!?0?^.n=xx0@g<pc1{pc=cp!  
7:  :cp:"]"?0?<xx1:cp:"["?0?>xx1:xx:1?0?^.n^.g@e  

It is a cheat; the source is read (line 2) and placed into WARP's random access stack starting at index N5D, which is one greater than the standard brainfuck cell count. It does not implement the bf , operator, but that would be a simple matter to address. And all that in just over 300 bytes (in the released version).
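For comparison, the brainfuck semantics the WARP program implements can be sketched in a few lines of JavaScript (like the WARP version, the ',' input command is omitted):

```javascript
// Minimal brainfuck interpreter: a zeroed tape, a data pointer, and the
// six supported commands. Loops are handled by scanning for the matching
// bracket, tracking nesting depth.
function bf(src) {
  var tape = new Array(30000).fill(0), dp = 0, out = '';
  for (var pc = 0; pc < src.length; pc++) {
    switch (src[pc]) {
      case '+': tape[dp] = (tape[dp] + 1) % 256; break;
      case '-': tape[dp] = (tape[dp] + 255) % 256; break;
      case '>': dp++; break;
      case '<': dp--; break;
      case '.': out += String.fromCharCode(tape[dp]); break;
      case '[': // zero cell: jump forward past the matching ]
        if (tape[dp] === 0) {
          for (var depth = 1; depth > 0;) {
            pc++;
            if (src[pc] === '[') depth++;
            if (src[pc] === ']') depth--;
          }
        }
        break;
      case ']': // non-zero cell: jump back to the matching [
        if (tape[dp] !== 0) {
          for (var d = 1; d > 0;) {
            pc--;
            if (src[pc] === ']') d++;
            if (src[pc] === '[') d--;
          }
        }
        break;
    }
  }
  return out;
}
```

The WARP version does exactly this, only with its random access stack standing in for the tape.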

I'm inordinately pleased with it.

Sunday, June 16, 2013

Further WARP programs

I've been busy trying to stabilise the WARP interpreter, and have a few test programs that 'validate' the 1.7 version:

Collatz Conjecture (from 99,000)
 =se24E0)"Hailstone sequence for ")se@a)se(D(A*se#!2:!:0  

Prime number finder
This program uses decimal (+A) instead of hexatrigesimal, and leans heavily on the stack, the ? operator and the new # operator. It misses out 3 and 2 on output, as it uses integral division to ameliorate its otherwise O(n) behaviour. Of course, halving the search space is still regarded as having O(n) characteristics, but you can notice the difference in performance!
 +A)"Enter start: ",=cu!)"Primes <= ")cu(13(10  
 @i*cu#!ca?0?^.n<ca1:ca:1?1?^.i)cu)" "  

Simple calculator
A very simple integral calculator.
 +A)"WARP simple calculator: Enter q (as operator) to quit, c (as operator) to clear "(13(10  
 =ac0@l)"Enter operator: ",=op!:op:"q"?0?^.e:op:"c"?0?^.c  
 )"Enter operand: ",=nu!:op:"+"?0?>acnu:op:"-"?0?<acnu:op:"*"?0?&acnu:op:"/"?0?$acnu  

Reverse an entered string
This example uses a feature not present in the 1.7 release - stack rotation using the ' operator.
 )"Enter a string to reverse: ",=st!%st@r=ch!'*ch'^_r'@p)!^_p  

Specification here:
Mostly complete interpreter:

Sunday, June 2, 2013

The WARP esoteric language

I thought I'd add my own esoteric language to the considerable pantheon - it's called WARP (a rather poor recursive acronym: WARP and run programming - because the full interpreter should randomize ("warp") the source as it executes). I added it also as a way of cheering myself up, as I have been terribly sick over the last week. It has a variable radix system, but starts in hexatrigesimal (base 36) mode.

Below is a full WARP program that outputs the first 71 numbers in the Fibonacci sequence.

 *1=na1=li1Z@z;)!)" ">!na;<!na=na!<li1^liz  
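For reference, the one-liner above is equivalent to this JavaScript sketch: print the first 71 Fibonacci numbers in base 36, WARP's starting radix (the literal 1Z in the WARP source is 71 in base 36):

```javascript
// Print the first 71 Fibonacci numbers in base 36, as the WARP program does.
// fib(72) is still below 2^53, so plain numbers stay exact here.
var a = 1, b = 1, out = [];
for (var i = 0; i < 71; i++) {
  out.push(a.toString(36).toUpperCase());
  var next = a + b;
  a = b;
  b = next;
}
console.log(out.join(' '));
```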

Specification here:
Mostly complete interpreter:

Saturday, April 20, 2013

MVC action filters - viewmodel to domain object mapping (and policies)

I'm undertaking a rather large task to re-implement an existing ASP.NET Web Forms application using ASP.NET MVC. It's been a thoroughly enjoyable piece of work to date, and, in particular, I have a growing fondness for the MVC filter sub system.

Part of the existing app deals with clients applying for a service/facility. The act of applying uses a simple workflow style, most often similar to Start->Details->Confirmation->Receipt. Each of these steps is an ASPX page, and in the new project, an MVC strongly typed view. The state of the client's application must of course be retained as they navigate this simple workflow, being able to proceed forward and backwards as they need.

The MVC implementation employs view models as the strongly typed objects that the Views consume; I have an existing object to object mapping framework (I was forced to write one before tools such as auto mapper existed, and it's been tweaked considerably for my purposes - for example, it does bi directional mapping by default, and can implicitly convert object types where this makes sense).

Additionally, there are some controller wide specific restrictions that should be applied; policies if you will.

I was interested in the approach espoused by Jimmy Bogard here. But, IMO, it did not go far enough.

What I have currently is action filters that can apply the controller wide policy (I also have a couple of global filters for site wide policy - something that was done in the ASP.NET web app using http modules).

Additionally, there is a navigation tracking filter, that handles the domain object that is associated with the current client application. On top of that, a mapping filter exists that handles the mapping as required between view model and domain model, updating the domain model held in distributed cache when necessary.

I have 'anonymised' the domain object and view model names, but the intent should be clear from this controller excerpt.

1:  [BasicRestrictionPolicyFilter(RestrictionPolicyType = typeof(TemporalRestriction),   
2:                 Action = "UnavailableFeature",            
3:                 Controller = "Security",   
4:                 RouteName = SharedAreaRegistration.RouteName)]  
5:     [NotifiableRestrictionPolicyFilter(  
6:        RestrictionPolicyType = typeof(EnhancedSecurityRestriction),  
7:        RouteName = SharedAreaRegistration.RouteName,  
8:        MessageType = AlertMessageType.Notice,  
9:        LiteralMessage = "Sorry, you required an enhanced security ability to be active")]  
10:     [NavigationTrackerFilter(DomainObjectType = typeof(SomeApplication))]  
11:     public class ApplyController : BaseController {  
12:        [InjectionConstructor]  
13:        public ApplyController(ISomeService service) {  
14:           SomeService = service;  
15:        }  
16:        [PageFlow]  
17:        [MappingFilter(TargetType = typeof(DetailsViewModel))]  
18:        public ActionResult Start() {  
19:           return View(ViewData.Model);  
20:        }  
21:        [PageFlow(Order = 1)]  
22:        [MappingFilter(TargetType = typeof(DetailsViewModel))]  
23:        public ActionResult Details() {  
24:           return View(ViewData.Model);  
25:        }  
26:        [HttpPost]  
27:        [MappingFilter]   
28:        public ActionResult Details(DetailsViewModel details) {  
29:           if (ModelState.IsValid) return RedirectToAction("Confirmation");  
30:           return View(details);  
31:        }  
32:        [PageFlow(Order = 2)]  
33:        [MappingFilter(TargetType = typeof(SummaryViewModel))]  
34:        public ActionResult Confirmation() {  
35:           return View(ViewData.Model);  
36:        }  
37:        [HttpPost]  
38:        public ActionResult Confirmation(SummaryViewModel summary) {  
39:           SomeService.Order(GetTrackedHostedObject<SomeApplication>());  
40:           return RedirectToAction("Receipt");  
41:        }  
42:        [PageFlow(Order = 3)]  
43:        [MappingFilter(TargetType = typeof(SummaryViewModel))]  
44:        public ActionResult Receipt() {  
45:           return View(ViewData.Model);  
46:        }  
47:        private ISomeService SomeService { get; set; }  
48:     }  

I'll dissect some of this now.
1:  [BasicRestrictionPolicyFilter(RestrictionPolicyType = typeof(TemporalRestriction),   
2:                 Action = "UnavailableFeature",            
3:                 Controller = "Security",   
4:                 RouteName = SharedAreaRegistration.RouteName)]  
5:     [NotifiableRestrictionPolicyFilter(  
6:        RestrictionPolicyType = typeof(EnhancedSecurityRestriction),  
7:        RouteName = SharedAreaRegistration.RouteName,  
8:        MessageType = AlertMessageType.Notice,  
9:        LiteralMessage = "Sorry, you required an enhanced security ability to be active")]  


  • Lines 1-4: Associates a controller-scope basic policy filter with the Apply controller. This filter instantiates the policy type that is passed to it (TemporalRestriction), asks it if "everything is alright", and if not, redirects the current request to the Security controller, targeting the action "UnavailableFeature"
  • Lines 5-9: Employ a slightly more sophisticated restriction filter, which, on policy failure, redirects the user to a specific 'issues' view that can integrate with our CMS system or use a literal message
This is all rather simple, but the mapping filter behaviour is marginally more complicated.

10:     [NavigationTrackerFilter(DomainObjectType = typeof(SomeApplication))]  
11:     public class ApplyController : BaseController {  
12:        [InjectionConstructor]  
13:        public ApplyController(ISomeService service) {  
14:           SomeService = service;  
15:        }    


  • Line 10: A controller scope navigation tracker filter is declared, that looks after a domain object of type SomeApplication. Its function is really just to ensure that the object exists in the distributed cache we have
  • Lines 11-15: Use Unity to inject a service that is required by the controller

16:        [PageFlow]  
17:        [MappingFilter(TargetType = typeof(DetailsViewModel))]  
18:        public ActionResult Start() {  
19:           return View(ViewData.Model);  
20:        }   

The Start action is the first in the simple workflow we have.


  • Line 16: PageFlow is a simple attribute, not a filter. It is used to support previous/next behaviour. Decorating actions in this fashion allows target actions for the base controller implemented next and previous actions to be inferred automatically. As is seen later in the controller, you can specify an 'order' property, to note the sequence in the workflow where an action 'resides'
  • Line 17: Request that the current domain model (managed by the navigation tracker filter) be mapped into a view model of type DetailsViewModel.
Things get more interesting when types can be inferred, as below:
26:        [HttpPost]  
27:        [MappingFilter]   
28:        public ActionResult Details(DetailsViewModel details) {  
29:           if (ModelState.IsValid) return RedirectToAction("Confirmation");  
30:           return View(details);  
31:        }    


  • Line 26: Note that this is a POST request
  • Line 27: Request mapping of the DetailsViewModel object to the existing Domain object - we know both these types, so no specification of them is necessary in the mapping filter declaration 

37:        [HttpPost]  
38:        public ActionResult Confirmation(SummaryViewModel summary) {  
39:           SomeService.Order(GetTrackedHostedObject<SomeApplication>());  
40:           return RedirectToAction("Receipt");  
41:        }   


  • Line 37: This is the client confirming that they wish to proceed
  • Line 39: Use our Unity injected service to place an order, supplying the domain object we have been tracking and updating.
Again, most of this is quite straightforward. But the mapping filter is performing a number of actions behind the scenes, including:

GET requests
Request that the tracked domain object be mapped into the view model, and set the controller's model to the newly created and populated view model. All this occurs in the OnActionExecuted(...) override (well, not literally, as the mapping filter behaviour is split across a number of classes).

POST requests
Two distinct possibilities here:
  • OnActionExecuting(...):  if the filter has been told to examine the model state, and it is valid, use the mapping service to map the view model into the domain object, and update the distributed cache (with the modified domain object).
  • OnResultExecuting(...): if the model state is invalid, and we have been told to care about that, ask the mapping service to execute any 'pre-maps' the view model defines (think of these as initialization actions), as the act of posting it back will not have done that. This means that the view model will then be in a self consistent state. 
This is a 'toe in the water' implementation at the moment, but it seems to have promise.

Saturday, December 29, 2012

WCF streaming: Random MemoryStream corruption

This one bit me a while ago and I responded to a Stack Overflow post about it - the original poster had changed their implementation to avoid the issue, and slightly missed the point of my answer - which was a solution to a very real (and annoying) problem.

As the post describes, returning a MemoryStream from a WCF service can result in the caller receiving a corrupted response. But, as I discovered, only when WCF tracing is active in web.config. So, the solution is simple enough - make sure that web.config does not include a system.diagnostics element. Further testing revealed that this (Microsoft) defect is still present in .NET 4.5.
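For context, the trigger is the standard WCF tracing configuration, which looks along these lines (listener name and log path here are illustrative, not from the original post):

```xml
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
```

Removing (or commenting out) the whole system.diagnostics element is the workaround.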

 Stack overflow post:

Saturday, October 20, 2012

Modelling a file system in Javascript using the Composite pattern

A rather peculiar post this one. I've been building a terminal application using the interesting JQuery Terminal.  As part of this application, I needed to model a file system that could be created dynamically, with the usual suspects involved - directories, files, paths, permissions.

For my purposes, I also required the implementation to be 'pluggable' - that is, where the directory information and file content was sourced would be supplied by external objects.

I considered a few options, but opted finally for the simplest - using the composite pattern. So I have this trivial set of abstractions:

FileResource < FileSystemResource
DirectoryResource < FileSystemResource

where DirectoryResource (1) --- (*) FileSystemResource.

To provide a central point of access, there is a file system manager type, which in the example code only allows one to find a resource (directory/file) or change the current directory.

So here is the code in its entirety, and at the end of the post, some simple test statements. The first few lines are some helper bits and pieces, before starting with the definition of a simple permission.

 jfs = {};  
 // Inheritance helper  
 jfs.extend = function (childClass, parentClass) {  
      childClass.prototype = new parentClass;  
      childClass.prototype.constructor = childClass;  
      childClass.prototype.parent = parentClass.prototype;  
 };  
 jfs.fsConfig = {  
      rootDirectory: '/',  
      pathDelimiter: '/',  
      parentDirectory: '..',  
      currentDirectory: '.'  
 };  
 // Util  
 Array.prototype.first = function (match, def) {  
      for (var i = 0; i < this.length; i++) {  
          if (match(this[i])) {  
              return this[i];  
          }  
      }  
      return def;  
 };  
 String.prototype.splitButRemoveEmptyEntries = function (delim) {  
      return this.split(delim).filter(function (e) { return e !== ' ' && e !== '' });  
 };  
 // Simple permissions group  
 jfs.fileSystemPermission = function (readable, writeable) {  
      this._readable = readable;  
      this._writeable = writeable;  
 };  
 jfs.fileSystemPermission.prototype.writeable = function () {  
      return this._writeable;  
 };  
 jfs.fileSystemPermission.prototype.readable = function () {  
      return this._readable;  
 };  
 jfs.fileSystemPermission.prototype.toString = function () {  
      return (this.readable() ? 'r' : '-').concat((this.writeable() ? 'w' : '-'), '-');  
 };  
 jfs.standardPermission = new jfs.fileSystemPermission(true, false);  
 // Base resource  
 jfs.fileSystemResource = function () {  
      this._parent = undefined;  
      this._tags = {};  
 jfs.fileSystemResource.prototype.init = function (name, permissions) {  
      this._name = name;  
      this._permissions = permissions;  
      return this;  
 // Return the contents of the receiver i.e. for cat purposes  
 jfs.fileSystemResource.prototype.contents = function (consumer) {  
 // Return the details of the receiver i.e. for listing purposes  
 jfs.fileSystemResource.prototype.details = function (consumer) {  
      return this.toString();  
 }; = function () {  
      return this._name;  
 jfs.fileSystemResource.prototype.getParent = function () {  
      return this._parent;  
 jfs.fileSystemResource.prototype.getTags = function () {  
      return this._tags;  
 jfs.fileSystemResource.prototype.setParent = function (parent) {  
      this._parent = parent;  
 jfs.fileSystemResource.prototype.permissions = function () {  
      return this._permissions;  
 jfs.fileSystemResource.prototype.type = function () {  
      return '?';  
 jfs.fileSystemResource.prototype.find = function (comps, index) {  
 jfs.fileSystemResource.prototype.absolutePath = function () {  
      return !this._parent ? '' :   
 jfs.fileSystemResource.prototype.toString = function () {  
      return this.type().concat(this._permissions.toString(), ' ', this._name);  
 // Directory  
 jfs.directoryResource = function () {  
      this.children = [];  
 jfs.extend(jfs.directoryResource, jfs.fileSystemResource);  
 jfs.directoryResource.prototype.contents = function (consumer) {  
      return '';  
 jfs.directoryResource.prototype.details = function (consumer) {  
      consumer('total 0');  
      this.applyToChildren(function (kids) { kids.forEach(function(e) { consumer(e.toString()); }) });  
 jfs.directoryResource.prototype.type = function () {  
      return 'd';  
 jfs.directoryResource.prototype.addChild = function (resource) {  
      return this;  
 jfs.directoryResource.prototype.applyToChildren = function (fn) {  
      return this._proxy && this.children.length == 0 ? this._proxy.obtainState(this, fn) : fn(this.children);  
 jfs.directoryResource.prototype.setProxy = function (proxy) {  
      this._proxy = proxy;  
 jfs.directoryResource.prototype.find = function (comps, index) {  
      var comp = comps[index];  
      var node = comp === '' || comp === jfs.fsConfig.currentDirectory ? this :  
                (comp === jfs.fsConfig.parentDirectory ? this.getParent() :   
                this.applyToChildren(function(kids) { return kids.first(function(e) { return === comp; }); }));  
      return !node || index === comps.length - 1 ? node : node.find(comps, index + 1);  
 // File  
 jfs.fileResource = function () {  
 jfs.extend(jfs.fileResource, jfs.fileSystemResource);  
 // consumer should understand:   
 // accept(obj) - accept content  
 // failed   - producer failed, totally or partially  
 jfs.fileResource.prototype.contents = function (consumer) {  
      this._producer(this, consumer || this._autoConsumer);  
 jfs.fileResource.prototype.type = function () {  
      return '-';  
 jfs.fileResource.prototype.plugin = function (producer, autoConsumer) {  
      this._producer = producer;  
      this._autoConsumer = autoConsumer;  
 // FSM  
 jfs.fileSystemManager = function () {  
      this._root = new jfs.directoryResource();  
      this._root.init('', jfs.standardPermission);  
      this._currentDirectory = this._root;  
 jfs.fileSystemManager.prototype.find = function (path) {  
      var components = path.splitButRemoveEmptyEntries(jfs.fsConfig.pathDelimiter);  
      if (components.length === 0) components = [ '.' ];  
      return (path.substr(0, 1) === jfs.fsConfig.rootDirectory ? this._root : this._currentDirectory).find(components, 0);  
 jfs.fileSystemManager.prototype.currentDirectory = function () {  
      return this._currentDirectory;  
 jfs.fileSystemManager.prototype.root = function () {  
      return this._root;  
 jfs.fileSystemManager.prototype.changeDirectory = function (path) {  
      var resource = this.find(path);  
      if (resource) this._currentDirectory = resource;  
      return resource;  

And the test code; it creates a directory under the root called 389, adds a file (called TestFile) to that directory, and plugs in an example 'producer' function (one that knows how to get the content of this type of file object) together with an auto consumer - that is, a default consumer attached to the object. It is also possible to pass any consumer in when calling the contents function and not use default consumers at all.

Finally, and for illustration only, we use the file system manager find function to get the actual resource denoted by the full path name, and ask it for its contents. As we have an auto consumer associated with the object, it executes. In this case, we would dump two lines to the console log; 'some' and 'content'.

 var m = new jfs.fileSystemManager();
 var d = new jfs.directoryResource();
 d.init('389', jfs.standardPermission);
 m.currentDirectory().addChild(d);
 var f = new jfs.fileResource();
 f.init('TestFile', jfs.standardPermission);
 d.addChild(f);
 f.plugin(function (fileResource, consumer) {
          ['some', 'content'].forEach(function (e) { consumer(e); });
      },
      function (e) { console.log(e); });
 var r = m.find('/389/TestFile');
 r.contents();
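One part of the listing the test code never exercises is the directory proxy hook (setProxy/obtainState), which is where the 'pluggable' sourcing of directory information mentioned at the start comes in. Below is a hypothetical, self-contained sketch of the idea - lazyDirectory and fetchOnceProxy are illustrative names of my own, not part of the jfs code; only the applyToChildren/setProxy/obtainState shape mirrors directoryResource above:

```javascript
// A directory that defers to a proxy to populate its children lazily.
function lazyDirectory() {
    this.children = [];
}
lazyDirectory.prototype.setProxy = function (proxy) {
    this._proxy = proxy;
};
// If a proxy is attached and nothing is cached yet, ask the proxy to
// obtain state; otherwise apply fn directly to the cached children.
lazyDirectory.prototype.applyToChildren = function (fn) {
    return this._proxy && this.children.length === 0
        ? this._proxy.obtainState(this, fn)
        : fn(this.children);
};

// An example proxy: 'fetches' children (a hard-coded stand-in for a remote
// or disk read), caches them on the directory, then applies fn.
var fetchOnceProxy = {
    obtainState: function (dir, fn) {
        dir.children = ['a.txt', 'b.txt'];
        return fn(dir.children);
    }
};

var d = new lazyDirectory();
d.setProxy(fetchOnceProxy);
d.applyToChildren(function (kids) { console.log(kids.length); }); // logs 2
d.applyToChildren(function (kids) { console.log(kids.length); }); // cached now; logs 2
```

The second call bypasses the proxy entirely, since the children array is already populated - the caching decision lives in applyToChildren, so callers never know whether a fetch happened.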

Saturday, July 7, 2012

Smalltalk inspired extensions for c#

Having developed in Smalltalk for about 3 years in the early '90s, I still regard the language fondly and am always slightly saddened that it never achieved mainstream adoption. Without Smalltalk, I don't believe I would have so 'easily' become moderately proficient in object oriented thought - note, not analysis or design, but literally thinking 'as an object'. Anthropomorphising is still something I engage in.

Philosophy aside, a few weeks ago, after a brief period of development in Squeak (see also below), I amused myself by considering the behaviour of the Smalltalk boolean object. You 'talk' to the boolean, and can effectively ask it to do something if it is true or false.

A simple example below, that writes a message to the Transcript window depending on the outcome of the test (which returns a boolean):

 a > b  
 ifTrue:[ Transcript show: 'greater' ]  
 ifFalse:[ Transcript show: 'less or equal' ]  

So, being perverse, what could I do to mimic this behaviour in c#, so I might be able to say:

 (a > b)  
   .IfTrue(() => Console.Write("greater"))  
   .IfFalse(() => Console.Write("less or equal"));  

It's obvious really - use an extension method on the System.Boolean type. This is shown below:

 using System;

 public static class BoolExtension {
          public static bool IfTrue(this bool val, Action action) {
              if (val) action();
              return val;
          }
          public static bool IfFalse(this bool val, Action action) {
              if (!val) action();
              return val;
          }
 }

Please don't misinterpret - I'm not espousing this as a necessarily good idea, more demonstrating that extension methods allow one to 'fake' the presence of interesting constructs present in other languages.
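As an aside, the same construct can be mimicked in JavaScript (the language of the previous post) using its analogue of extension methods - extending Boolean.prototype. This is a sketch of my own, not part of either post's code, and it carries a boxing caveat noted in the comment:

```javascript
// Caveat: inside a method invoked on a boolean primitive, 'this' is a boxed
// Boolean object (which is always truthy), so valueOf() must be used to get
// at the underlying primitive.
Boolean.prototype.ifTrue = function (action) {
    if (this.valueOf()) action();
    return this.valueOf(); // return the primitive so calls can be chained
};
Boolean.prototype.ifFalse = function (action) {
    if (!this.valueOf()) action();
    return this.valueOf();
};

var a = 2, b = 1;
(a > b)
    .ifTrue(function () { console.log('greater'); })
    .ifFalse(function () { console.log('less or equal'); }); // prints 'greater'
```

Chaining works because the returned primitive is boxed afresh on the next method call - the same spirit of 'faking' a construct, with the same health warning as the C# version.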

From the squeak website:
Welcome to the World of Squeak! Squeak is a modern, open source, full-featured implementation of the powerful Smalltalk programming language and environment. Squeak is highly-portable - even its virtual machine is written entirely in Smalltalk making it easy to debug, analyze, and change. Squeak is the vehicle for a wide range of projects from multimedia applications, educational platforms to commercial web application development.