
Saturday, January 23, 2021

Brevity or obfuscation in c#

Having a look at the Blazorise github source, I encountered this method:

     protected override string FormatValueAsString( IReadOnlyList<TValue> value )  
     {  
       if ( value == null || value.Count == 0 )  
         return string.Empty;  
       if ( Multiple )  
       {  
         return string.Empty;  
       }  
       else  
       {  
         if ( value[0] == null )  
           return string.Empty;  
         return value[0].ToString();  
       }  
     }  
Doesn't that seem like a lot of code for some simple behaviour? I've seen development styles like this before, often with the argument that it's easy to read. I suppose that is true. 

But it also misses the point -- you are not using the c# language and libraries to their full extent...this does:
     protected override string FormatValueAsString(IReadOnlyList<TValue> values)  {  
          var result = values?.FirstOrDefault()?.ToString();  
          return result == null || Multiple ? string.Empty : result;  
     }  
It's shorter, loses no meaning, uses LINQ acceptably and has one return statement instead of four. I also took the liberty of removing the strange spacing around the method argument and naming it more appropriately. And using the K&R statement styling. 

However, at the expense of some readability, but with a better performance profile, you could write:
     protected override string FormatValueAsString(IReadOnlyList<TValue> values)    
         =>  Multiple ? string.Empty : values?.FirstOrDefault()?.ToString() ?? string.Empty;  

If I was being picky:
  • I'd have left a QA comment asking if the method name was really appropriate -- it's borderline IMO. 
  • The shorter implementations allow for the possibility that the ToString() method of TValue might return null (a defect in that case) - you can't discount that as a possibility, and it could break the caller of the method
  • An engineering approach might include a pre-condition that 'values' cannot be null
  • A post-condition would be that the return result is always not null (see the sketch after this list)
  • The use of 'Multiple' looks a little forced - without delving further, could this test even be avoided altogether and moved outside?
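
As a sketch only (not a suggestion Blazorise adopt it, and assuming System.Linq and System.Diagnostics are in scope), the pre- and post-condition points might be expressed like this:

     protected override string FormatValueAsString(IReadOnlyList<TValue> values) {  
          // Pre-condition: callers must supply a list, even an empty one  
          if (values == null) throw new ArgumentNullException(nameof(values));  
          var result = Multiple ? string.Empty : values.FirstOrDefault()?.ToString() ?? string.Empty;  
          // Post-condition: never hand a null back to the caller  
          Debug.Assert(result != null);  
          return result;  
     }  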
I'm very much a fan of internal as well as external quality. 

Wednesday, January 16, 2019

Angular versus Blazor - working SPA examples compared

Overview

Most of us have observed the ascent of Angular (in all its versions) over the last few years. I enjoy working with Angular, but it does feel on occasion that it complicates matters for little gain. Sure, compared to KnockoutJS and ExtJS it works very well, but something always nags a little.

I have been following the evolution of Blazor with interest. I won't describe it in detail, but its use of WASM, Mono and the ability to create an SPA with (mostly just) c#, is appealing. All the usual arguments in favour of such an approach apply. It's only an alpha framework, but I thought it might be instructive/amusing to attempt to re-create an SPA I have using just Blazor, and compare the results.

The SPA

I have more than a passing interest in esoteric languages, and wrote one myself (WARP) for a laugh.

The SPA has these features:

  • Routing 
  • Use of MEF discovered language interpreters via a trivial .NET Core API
  • The ability to switch between languages 
  • Enter source code for a particular language that is dispatched to the API for execution
  • Respond to 'interrupts' received from the API, which signal that a user is required to enter input of some kind
  • Display output as it is received from the API execution of the source code supplied
  • The ability to cancel execution if a program is slow (esoteric languages tend to be interpreted, and seemingly simple tasks can be glacial in terms of execution speed)
  • Display a summary of the language as simple text
  • Provide an off site link to examine the language in greater detail
There is a project on GitHub with full source. Note that web sockets are used to communicate between client and server. Notes on building and running are at the end of this post.

Angular SPA
Angular 7 is used as the base framework, using the angular2-websocket module, which still seems the best for web sockets. It's all hosted in VS 2017, and uses ng build (not webpack or similar). It's reasonably straightforward.

Blazor SPA
Built with Blazor.Browser 0.7.0 (client) and Blazor.Server 0.7.0 (server). Given the 3 models of Blazor deployment, the one chosen is an ASP.NET Core model.


Screen grabs
A couple of screen grabs, noting that I did not attempt to make the UI's identical. The images show the execution of a prime number 'finder' written in WARP, both given a start point of 199.

Angular (screenshot omitted)

Blazor (screenshot omitted)

Differences
There are some subtle differences, aside from the not so subtle use of c# and Razor as opposed to Typescript and HTML.

Binding
The source code text area (see screen grabs above) should be an 'instant' binding; that is, any key press should affect the state of the Run button. If you have not entered source code, you obviously can't run, but as soon as you enter one character, that is possibly a viable esoteric program.

In Angular, using a plain form, it's easy enough with ngModel and the required and disabled attributes:

 <div class="row">  
      <div class="col-12">  
           <textarea cols="80" rows="10"   
             [(ngModel)]="sourceCode" style="min-width: 100%;"   
             name="sourceCode" required [disabled]="running">  
           </textarea>  
         </div>  
 </div>  
 <p></p>  
 <div class="row">  
    <div class="col-12">  
      <button type="submit" class="btn btn-primary"   
         [disabled]="!executionForm.form.valid || running">  
            Run  
       </button>&nbsp;    
       <button type="button" class="btn btn-danger"   
           (click)="cancel()" [disabled]="!running">  
            Cancel  
        </button>    
      </div>  
  </div>   

It was almost as straightforward in Blazor, but with a quirk:

 <div class="row">  
     <div class="col-12">  
         <textarea cols="80" rows="10" bind="@SourceCode" style="min-width: 100%;"  
              name="sourceCode" required   
              onkeyup="this.dispatchEvent(new Event('change', { 'bubbles': true }));">   
         </textarea>  
     </div>  
 </div>  
 <p></p>  
 <div class="row">  
     <div class="col-12">  
         <button type="submit" class="btn btn-primary" onclick="@Run"   
               disabled='@(NotRunnable || Running)'>  
             Run  
         </button>&nbsp;  
         <button type="button" class="btn btn-danger"   
             disabled="@(Running == false)" onclick="@StopExecution">  
             Cancel  
         </button>  
     </div>  
 </div>  

Now the disabled attribute behaviour is fine, just a bit of Razor. But the part I didn't like or want is the addition of an onkeyup handler on the textarea. However, without it, the source code only updates when the textarea loses focus, which is not the behaviour the Angular SPA has (and which is the correct behaviour).

Attributes
If you are not used to Razor, the attribute usage looks a little strange. It's also not semi-abstracted in the way that Angular's is (compare 'click' with 'onclick'). But I can't say that it bothers me that much.

Sharing code
These SPAs are very simple, and really only have one shared type across them, an object called LanguageMetadata (a simple data object that holds an example of a language supported by the ELTB service/API). With Blazor, I can share that type between client and server by having a separate class library project referenced by both. With Angular, however, I have to define an interface (well, I don't, but it is nicer to do so) - so I haven't shared anything, I have copied something.

For these SPAs, it's not a big deal. But for more complex projects (and I've worked on some), the sharing approach Blazor makes possible could be exceptionally useful.
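
To make that concrete, the shared type need be no more than a simple data object in a class library referenced by both projects (the property names here are illustrative; the real shape is in the GitHub repo):

 // Shared class library - compiled into both the Blazor client and the server  
 public class LanguageMetadata {  
     public string Name { get; set; }  
     public string Summary { get; set; }  
     public string DetailUrl { get; set; }  
 }  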

Http client
Angular makes a lot of noise about its use of Rx and Observables - and yes, it is very appealing (I just came off a project where Rx.NET was used heavily). Blazor can afford to take a different approach, using a 'standard' HttpClient with an async call.

It certainly has a more natural look and feel (excuse the hard coded URLs - it's just an example after all!):

Angular

  supportedLanguages() {  
   return this  
    ._http  
    .get(this.formUrl(false))  
    .pipe(map((data: any[]) => {  
     return <LanguageDescription[]>data  
    }));  
  }  

Blazor
 protected override async Task OnInitAsync() {  
     LanguageMetadata.All =   
        await httpClient.GetJsonAsync<List<LanguageMetadata>>   
            ("http://localhost:55444/api/EsotericLanguage/SupportedLanguages");  
 }  

When I look at it, the Ng approach with pipe and map just looks a little fussy.

Web sockets
Not all of the .NET APIs you might want exist in Mono. One such is the web sockets API, which underpins the implementation of both versions of the SPA. I couldn't use something like SignalR (it is supported by Blazor), as I have distinct request/response semantics when user input is required for an executing piece of esoterica.

My understanding is that support is coming, but the JavaScript interop of Blazor allowed me to solve the issue relatively quickly. Unfortunately, it meant writing some raw JS to do so, as below:

 window.websocketInterop = {  
     socket: null,  
     connect: function (url, helper, msg) {  
         console.log("Connecting");  
         socket = new WebSocket(url);  
         socket.onopen = function (evt) {  
             msg && socket.send(msg);  
         }  
         socket.onmessage = function (event) {  
             console.debug("WebSocket message received:", event);  
             helper.invokeMethod("OnMessage", event.data);  
         };  
         socket.onclose = function (evt) {  
             console.log("Socket closed. Notify this..");  
             helper.invokeMethod("OnChannelClose");  
         }  
         console.log("Connected and ready....");  
     },  
     send: function (msg) {  
         console.log("Sending:" + msg);  
         socket.send(msg);  
     },  
     close: function () {  
         console.log("Closing socket on demand");  
         socket && socket.close();  
     }  
 };  

(This is not anywhere near a production implementation).

The interop parts are seen in the cshtml file, InterpreterContent.cshtml. For example, when the esoteric source code is sent (after pressing the Run button), it invokes the JS function 'websocketInterop.connect' defined previously, sending it a URL to connect to, a DotNetObjectRef and the actual source code as the first message to dispatch on the web socket:

 async Task Run() {  
         Output = string.Empty;  
         Running = true;  
         await JSRuntime.Current.InvokeAsync<object>  
                ("websocketInterop.connect",   
                InterpreterServiceUrl,   
                new DotNetObjectRef(this),   
                $"|{Language}|{SourceCode}");  
         StateHasChanged();  
 }  

The DotNetObjectRef encapsulates 'this' for this implementation, and allows the JS to call back into the 'this' instance. For example, when the socket is closed by the interpreter service (as it does when execution has completed), the JS calls
 
             helper.invokeMethod("OnChannelClose");  
 
which is defined in the cshtml file as:

 
     [JSInvokable]  
     public void OnChannelClose() {  
         Running = false;  
         StateHasChanged();  
 }  

with JSInvokable making it available to JS, and when called, sets Running to false, which will update the UI such that the Run button is now enabled, and the Cancel button disabled. Note the use of StateHasChanged, which propagates state change notification.

It's a double edged sword - the interop is well done, simple, works. But it should be a feature that is used infrequently.

Source code organization
One of the frequent criticisms of the Razor world is that it lets you mix code and HTML freely, giving it a somewhat 'classic ASP' feel if one is not careful. The Blazor SPA implementation is an example of that; I haven't attempted to make it modular or separate it out particularly.

But for established Razor shops, with good or reasonable practice, this is easy to address.

Less code
I definitely ended up with less code in the Blazor version. It's much easier to understand, builds quicker and means my c# knowledge can be used directly in the main. 

Unit testing
I didn't implement any unit tests for the purpose of this exercise; it's not destined for production, after all. Angular et al have good tools in this area: Jasmine, Karma and so on. But Blazor allows for componentization, which will support unit tests easily enough. Probably a draw in this regard.

Summary
Blazor is indeed an interesting concept; currently incomplete, not ready for production and a little slow on initial use. But the promise is there, and I suppose we'll have to wait and see if MS continues with it, because as many others have noted, this is the sort of project that can arrive with a muted fanfare, gain some traction and then disappear.

Being standards based helps its case, as the Silverlight debacle might illustrate. The considerable ecosystems of Angular, React and others might keep it at bay for a while if it makes it to full production use, but I think there is room for it.

Building and running from GitHub

If you fancy building and running the examples: once cloned or downloaded from GitHub, and built, you then have to unzip the file API\ELTB-Services\interpreters-netcoreapp2.1.zip and move the assemblies therein to API\ELTB-Services\bin\Debug\netcoreapp2.1.

This is because the interpreter service relies on these to exist and be discoverable by MEF, and I didn't go to the trouble of fully integrating a build.

Wednesday, April 19, 2017

Text template engine for generating content

I implemented this a while ago for the startup, and have been meaning to publish it to GitHub - now I have, here. It was used to support a multitude of text templates that had to be transformed to become part of email message content, and a flexible way to do that was required.

The idea is trivial, treat a stream of bytes/characters as containing substitution tokens and rewrite those tokens using a supplied context. Also includes iteration, expression support, context switching and a few other minor aspects.

It's a VS 2017, C# solution, targeting .netcoreapp 1.1 and .net 4.6+.

Once set up, transformation is ultra trivial, assuming a text template and some domain object being supplied to the pro forma method shown below:

        private string GenerateMessage<TObject>(string doc, TObject ctx) {
            ParseResult res = Parser.Parse(doc);
            EvaluationContext ec = EvaluationContext.From(ctx);
            var execution = res.Execute(ExecutionContext.Build(ec));
            return execution.context.ToString();
        }
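
Using a hypothetical template (the token syntax is purely illustrative - see the GitHub repo for the real grammar), generation might then look like:

        // 'Name' and 'OrderId' are substitution tokens resolved against the supplied context object
        var content = GenerateMessage(
            "Dear {Name}, your order {OrderId} has shipped.",
            new { Name = "Ada", OrderId = 1701 });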




Tuesday, January 3, 2017

REST API with a legacy database (no foreign keys!) - a T4 and mini DSL solution

Overview
Encountered yet again; a legacy database, with no foreign keys, hundreds of tables, that formed the backbone of a REST API (CRUD style), supporting expansions and the like.

It wasn't that there were no identifiable relationships between objects, just that the way the database was generated meant they were not captured in the database schema. But expansions had to be supported in an OData-like fashion. So, assuming you had a resource collection called Blogs, and each Blog object had sub resources of Author and Readers, you should be able to issue a request like the following for a Blog with an id of 17:

http://..../api/Blogs/17?expand=Author,Readers

and expect a response to include expansions for Author and Readers.

That's easy then. Just use Entity Framework mapping/fluent API to configure an independent association with any referenced objects. Well, that can work and often does. But it does not cope well when some selected OData abstractions are included in the mix - and I was using these to expose some required filtering capabilities to consumers of the REST API. Simply put, when you create an ODataQueryContext using the ODataConventionModelBuilder type, independent associations cause it to implode in a most unpleasant fashion.

So, if I can't use independent associations, and each resource may have 1..n associations which are realised using joins, I can:
  • Write a mega query that always returns everything for a resource, all expansions included
  • Write specific queries by hand for performance reasons, as the need arises
  • Generate code that maps expansions to specific implementations
Writing by hand was going to be tedious, especially as some of the resources involved had 4 or more expansions.

When I thought about the possible expansions for a resource, and how they can be associated to that resource, using non FK joins, it became apparent that I was dealing with a power set of possibilities.

For the Blogs example, with expansions of Author and Readers, I'd have the power set:

{ {}, {Author}, {Readers}, {Author, Readers} }
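
Enumerating that power set is cheap for the handful of expansions a resource typically has; a minimal sketch (using System.Collections.Generic and System.Linq; not the project's actual generator code):

 // Each subset of the declared expansions corresponds to one generated query method,  
 // e.g. { Author, Readers } maps to Get_Blogs_Author_Readers  
 static IEnumerable<IEnumerable<string>> PowerSet(IReadOnlyList<string> expansions) {  
     for (var mask = 0; mask < (1 << expansions.Count); mask++) {  
         var subset = mask; // copy, so the deferred query below does not capture the loop variable  
         yield return Enumerable.Range(0, expansions.Count)  
                                .Where(i => (subset & (1 << i)) != 0)  
                                .Select(i => expansions[i])  
                                .ToList();  
     }  
 }  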

So the idea that formed was:
  • Use a mini DSL to capture, for a resource, its base database query, and how to provision any expansions
  • Process that DSL to generate pro forma implementations
  • Use a T4 text template to generate C# code
I solved this in one way for a client, but then completely rewrote it at home because I thought it may be of use generally. I then extended that with a T4 template that generates an MVC 6 REST API...meaning all the common patterns that you typically see in a REST API were represented.

The actual VS 2015 C# solution is on Github. The implementation itself is a bit more sophisticated than described here, for reasons of brevity.

DSL 
The purpose of the DSL input file is to describe resources, their associated database base query, and any expansions and how they are realised. There are two formats supported - plain text and JSON.

An elided text file example for Blogs is:

1:  tag=Blogs  
2:  singular-tag=Blog  
3:  model=Blog  
4:  # API usage  
5:  restResourceIdProperty=BlogId  
6:  restResourceIdPropertyType=int  
7:  #  
8:  baseQuery=  
9:  (await ctx.Blogs  
10:  .AsNoTracking()  
11:  .Where(expr)  
12:  .Select(b => new { Blog = b })  
13:  {joins}  
14:  {extraWhere}  
15:  .OrderBy(a => a.Blog.BlogId)  
16:  .Skip(skip)   
17:  .Take(top)  
18:  .ToListAsync())  
19:  #  
20:  expansion=Posts  
21:  IEnumerable<Post>  
22:  .GroupJoin(ctx.Posts, a => a.Blog.NonKeyField, post => post.NonKeyField, {selector})  
23:  #   
24:  expansion=Readers  
25:  IEnumerable<Party>  
26:  .GroupJoin(ctx.Parties.Where(r => r.Disposition == "reader"),   
27:     a => a.Blog.NonKeyField, party => party.NonKeyField, {selector})  
28:  #   
29:  expansion=Author  
30:  Party  
31:  .Join(ctx.Parties, a => a.Blog.NonKeyField, party => party.NonKeyField, {selector})  
32:  .Where(p => p.Author.Disposition == "author")  

Relevant lines:

  • Line 1: starts a resource definition
  • Lines 5-6: allow this DSL instance to be used to generate a REST API
  • Lines 8-18: The base query to find blogs, along with specific markup that will be changed by the DSL processor (e.g. {selector}, {joins} and so on)
  • Lines 24-27: A definition of an expansion - linking a reader to a blog if a Party entity has a disposition of "reader" and the column "NonKeyField" of a Blog object matches the same column in a Party object. The expansion results in an IEnumerable<Party> object.
  • Lines 29-32: an Author expansion, this time (line 32) including a predicate to apply

Example class
After running the T4 template over the DSL file, a c# file is produced that includes a number of classes that implement the intent of the DSL instance.

The Blogs class (as generated) starts like this:

1:  public partial class BlogsQueryHandler : BaseQueryHandling {   
2:    
3:    protected override string TagName { get; } = "Blogs";  
4:    
5:    public const string ExpandPosts = "Posts";  
6:    public const string ExpandAuthor = "Author";  
7:    public const string ExpandReaders = "Readers";  
8:      
9:    public override IEnumerable<string> SupportedExpansions   
10:          { get; } = new [] { "Posts", "Author", "Readers"};  
11:    

Points:

  • Line 1: The Blogs query handler class subtypes a base type generated in the T4 that provides some common behaviour to be inherited by all generated classes
  • Lines 5-7: All the expansions defined in the DSL instance are exposed
  • Lines 9-10: An enumerable of all supported expansions is likewise created

Example method
Harking back to the power set comment, a method is generated for each subset of the power set, representing the query necessary to realize the intent of the expansion (or lack thereof).

Part of the pre-T4 activity generates queries for each subset using the content of the DSL instance. Methods are named accordingly (there are a number of configuration options in the T4 file; I'm showing the default options at work).

As below, the method name generated in T4 for getting blogs with the Author expansion applied is Get_Blogs_Author (and similarly, Get_Blogs, Get_Blogs_Readers, Get_Blogs_Author_Readers).


1:  private async Task<IEnumerable<CompositeBlog>>   
2:    Get_Blogs_Author(  
3:     BloggingContext ctx,   
4:     Expression<Func<Blog, bool>> expr,   
5:     int top,   
6:     int skip) {   
7:      return   
8:          (await ctx.Blogs  
9:          .AsNoTracking()  
10:          .Where(expr)  
11:          .Select(obj => new { Blog = obj })  
12:          .Join(ctx.Parties,   
13:              a => a.Blog.NonKeyField,   
14:              party => party.NonKeyField,   
15:              (a, author) => new { a.Blog, Author = author})  
16:          .Where(p => p.Author.Disposition == "author")  
17:          .OrderBy(a => a.Blog.BlogId)  
18:          .Skip(skip)  
19:          .Take(top)  
20:          .ToListAsync())  
21:          .Select(a => CompositeBlog.Accept(a.Blog, author: a.Author));  
22:    }  

Some comments:

  • Line 1: Declared privately as more general methods will use the implementation
  • Line 3: The EF context type is part of T4 options configuration
  • Line 4: Any 'root' resource expression to be applied
  • Lines 5-6: Any paging options supplied externally
  • Lines 7-21: The generated query, returning an enumerable of CompositeBlog, a class generated by DSL processing, that can hold the results of expansions and the root object

Generated 'top level' methods
As the generated 'expanded' methods are declared privately, I expose 'top level' methods. This makes the use of the generated class easier, since you pass in the expansions to use, and reflection is used to locate the appropriate implementation to invoke.

Two variants are generated per resource class - one for a collection of resources, one for a specific resource. The 'collection' style entry point is:

1:  public async Task<IEnumerable<CompositeBlog>>  
2:            GetBlogsWithExpansion(  
3:               BloggingContext ctx,   
4:               Expression<Func<Blog, bool>> expr = null,   
5:               int top = 10,   
6:               int skip = 0,   
7:               IEnumerable<string> expansions = null) {   
8:    return await GetMultipleObjectsWithExpansion<CompositeBlog, Blog>  
9:                 (ctx, expr, expansions, top, skip);  
10:  }  

Comments:

  • Lines 3-7: The EF context to use, along with a base expression (expr) and paging requirements and any expansions to be applied
  • Lines 8-9: Call a method defined in the generated BaseQueryHandling class to find the correct implementation and execute it (sketched below)
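
Purely as a sketch of that dispatch idea (the names, and the way expansion names are composed into the method name, are assumptions; the real method is generated into BaseQueryHandling by the T4 template, with System.Reflection, System.Linq et al in scope):

 // Compose the generated method name from the tag and the supplied expansions  
 // (e.g. Get_Blogs_Author_Readers), then locate and invoke it via reflection.  
 protected async Task<IEnumerable<TComposite>> GetMultipleObjectsWithExpansion<TComposite, TModel>(  
         BloggingContext ctx,  
         Expression<Func<TModel, bool>> expr,  
         IEnumerable<string> expansions,  
         int top,  
         int skip) {  
     var name = string.Join("_",  
         new[] { "Get", TagName }.Concat(expansions ?? Enumerable.Empty<string>()));  
     var method = GetType().GetMethod(name, BindingFlags.Instance | BindingFlags.NonPublic);  
     return await (Task<IEnumerable<TComposite>>)method.Invoke(  
         this, new object[] { ctx, expr, top, skip });  
 }  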

Example use
Imagine this closely connected to a REST API surface (there is a T4 template that can do this, which integrates with Swashbuckle as well). The paging, expansions and filter (expression) requirements will be passed in with a request from an API consumer and, after being sanitised, will in turn be given to a generated query handler class. So the example given is what one might call contrived.

A concrete test example appears below:

1:  using (BloggingContext ctx = new BloggingContext()) {  
2:   var handler = new BlogsQueryHandler();  
3:   var result = await handler.GetBlogsWithExpansion(  
4:             ctx,   
5:             b => b.BlogId > 100,   
6:             10,   
7:             10,   
8:             new[] { BlogsQueryHandler.ExpandAuthor,   
9:                     BlogsQueryHandler.ExpandReaders });  
10:   // .. Do something with the result  
11:  }  

Comments:

  • Line 1: Create a context to use
  • Line 2: Create an instance of the generated class
  • Line 3: Call the collection entry point of the generated class
  • Lines 4-7: Supply the EF context, an expression and top and skip specifications 
  • Lines 8-9: Add in some expansions

Customisation
The T4 template has a 'header' section that allows various options to be changed. I won't go into detail, but it is possible to change the base namespace for generated classes, the EF context type (which must be correct), whether a JSON or text format DSL file is being used, whether the 'advanced' DSL form is used - and so on. The GitHub page supplies more detail.


 // ****** Options for generation ******   
 // Namespaces to include (for EF model and so on)  
 var includeNamespaces = new List<string> { "EF.Model" };  
 // The type of the EF context  
 var contextType = "BloggingContext";   
 // Base namespace for all generated objects  
 var baseNamespace = "Complex.Omnibus.Autogenerated.ExpansionHandling";  
 // The DSL instance file extension of interest (txt or json)  
 var srcFormat = "json";  
 // True if the advanced form of a DSL instance template should be used  
 var useAdvancedFormDSL = true;  
 // Form the dsl instance file name to use  
 var dslFile = "dsl-instance" + (useAdvancedFormDSL ? "-advanced" : string.Empty) + ".";  
 // Default top if none supplied  
 var defaultTop = 10;  
 // Default skip if none supplied  
 var defaultSkip = 0;  
 // True if the expansions passed in should be checked  
 var checkExpansions = true;  
 // If true, then expansions should be title cased e.g. posts should be Posts, readers should be Readers and so on  
 var expansionsAreTitleCased = true;  
 // ****** Options for generation ******   

Saturday, November 7, 2015

Quorum implementation

Having always been interested in distributed consensus/quorum/master slave systems, I decided to implement a form of quorum software (C# of course). It helped me solve an issue I had with a web farm deployment where a Windows service needed to execute on exactly one machine (the 'master') at any time.

As the machines could be brought in and out of service at any time, the master had to be an elected or agreed active machine. So I wrote Quorum to avoid having a single point of failure.

It's a familiar take on replicated state machines, that does not aspire to the giddy heights of Paxos or Raft. But for all that, it is simple and thus far, reliable (enough :-).

Currently hosted on GitHub, here.

Has a simple MVC web app for quorum viewing (screenshot omitted).

Tuesday, August 18, 2015

NHooked - a web hook provider framework

Always in need of a side project, I thought I'd create a web hook provider framework - a WIP for sure, but showing promise. I've hosted it on GitHub - repo here.

README content:

nhooked (en-hooked) is a C# framework designed to allow a flexible web hook provider to be created. By flexible, I mean:

  • Reliable
  • Scalable
  • Simple

There are a great number of 'how-tos', blogs and the like which describe how to consume web hooks, but few that indicate how to create an implementation that can serve as a web hook provider, i.e. the source of the events that web hook consumers await.
nhooked tries to provide such a framework, making it as pluggable as possible and providing some base implementations.

Saturday, June 29, 2013

The Turing completeness of WARP

While not essential, I thought it might be amusing to be able to state that WARP is Turing complete. Not wishing to appeal to deeper theory, like Turing computable or mu recursive functions, and not wanting to rely on mere personal belief, the easiest mechanism to achieve this noble (!) goal was to write a WARP program that could interpret a language known to be Turing complete.

The best candidate was our old Turing tarpit, brainfuck. And thus the hideousness below - but it works...sure, glacial performance, but working. Because the eso interpreters I write are simple C# console apps, I employ the pipe mechanism of the command line to provide the brainfuck source to the WARP bf interpreter; as an example, "Hello world" is shown below (excuse the clumsy line breaks for formatting):

 echo "+++++ +++++[> +++++ ++ > +++++ +++++> +++> +   
 <<<< - ]> ++ .> + .+++++ ++ ..+++ .> ++ .<< +++++ +++++   
 +++++ .> .+++ .----- -.----- --- .> +.> ." | warp brainf.warp  

And the WARP source for the interpreter:
1:  =psN5D=pcps  
2:  @s,l=bs!:bs:0?0?^.p%bs@m}pc>pc1^_m|^.s  
3:  @p=espc=bf0=pcps@r{pc=cc!:"]":cc?0?^.l=ad0:"+":cc  
4:  ?0?=ad1:"-":cc?0?=ad-1:">":cc?0?>bf1  
5:  :"<":cc?0?<bf1:".":cc?0?^.o:bf:0?-1?=bf0{bf>!ad  
6:  }bf@n>pc1:es:pc?0?^.e^.r@o{bf(!^.n@l{bf:0:!?0?^.n=xx0@g<pc1{pc=cp!  
7:  :cp:"]"?0?<xx1:cp:"["?0?>xx1:xx:1?0?^.n^.g@e  

It is a cheat; the source is read (line 2) and placed into WARP's random access stack starting at index N5D, which is one greater than the standard brainfuck cell count. It does not implement the bf , operator, but that would be a simple matter to address. And all that in (the released version) just over 300 bytes.

I'm inordinately pleased with it.

Sunday, June 16, 2013

Further WARP programs

I've been busy trying to stabilise the WARP interpreter, and have a few test programs that 'validate' the 1.7 version:

Collatz Conjecture (from 99,000)
 =se24E0)"Hailstone sequence for ")se@a)se(D(A*se#!2:!:0  
 ?1?^.o$se2^.r@o&se3>se1@r:se:1?1?^.a  

Prime number finder
This program uses decimal (+A) instead of hexatrigesimal, and leans heavily on the stack, the ? operator and the new # operator. It misses out 3 and 2 on output, as it uses integral division to ameliorate its otherwise O(n) behaviour. Of course, halving the search space still leaves O(n) characteristics, but you can notice the difference in performance!
 +A)"Enter start: ",=cu!)"Primes <= ")cu(13(10  
 @o*cu$!2=ca!  
 @i*cu#!ca?0?^.n<ca1:ca:1?1?^.i)cu)" "  
 @n<cu1*cu<!1^!o  

Simple calculator
A very simple integral calculator.
 +A)"WARP simple calculator: Enter q (as operator) to quit, c (as operator) to clear "(13(10  
 =ac0@l)"Enter operator: ",=op!:op:"q"?0?^.e:op:"c"?0?^.c  
 )"Enter operand: ",=nu!:op:"+"?0?>acnu:op:"-"?0?<acnu:op:"*"?0?&acnu:op:"/"?0?$acnu  
 ^.p@c=ac0@p)ac(13(10^.l@e  

Reverse an entered string
This example uses a feature not present in the 1.7 release - stack rotation using the ' operator.
 )"Enter a string to reverse: ",=st!%st@r=ch!'*ch'^_r'@p)!^_p  

Specification here: http://esolangs.org/wiki/WARP
Mostly complete interpreter: http://esotericinterpreters.codeplex.com/

Sunday, June 2, 2013

The WARP esoteric language

I thought I'd add my own esoteric language to the considerable pantheon - it's called WARP (a rather poor recursive acronym: WARP and run programming - because the full interpreter should randomize ("warp") the source as it executes). I added it also as a way of cheering myself up, as I have been terribly sick over the last week. It has a variable radix system, but starts in hexatrigesimal (base 36) mode.

Below is a full WARP program that outputs the first 71 numbers in the Fibonacci sequence.

 *1=na1=li1Z@z;)!)" ">!na;<!na=na!<li1^liz  

Specification here: http://esolangs.org/wiki/WARP
Mostly complete interpreter: https://esotericinterpreters.codeplex.com/

Saturday, April 20, 2013

MVC action filters - viewmodel to domain object mapping (and policies)

I'm undertaking a rather large task to re-implement an existing ASP.NET Web Forms application using ASP.NET MVC. It's been a thoroughly enjoyable piece of work to date, and, in particular, I have a growing fondness for the MVC filter sub system.

Part of the existing app deals with clients applying for a service/facility. The act of applying uses a simple workflow style, most often similar to Start->Details->Confirmation->Receipt. Each of these steps is an ASPX page, and in the new project, an MVC strongly typed view. The state of the client's application must of course be retained as they navigate this simple workflow, being able to proceed forward and backwards as they need.

The MVC implementation employs view models as the strongly typed objects that the Views consume; I have an existing object to object mapping framework (I was forced to write one before tools such as AutoMapper existed, and it's been tweaked considerably for my purposes - for example, it does bidirectional mapping by default, and can implicitly convert object types where this makes sense).

Additionally, there are some controller wide specific restrictions that should be applied; policies if you will.

I was interested in the approach espoused by Jimmy Bogard here (http://lostechies.com/jimmybogard/2009/06/30/how-we-do-mvc-view-models/). But, IMO, it did not go far enough.

What I have currently is action filters that can apply the controller wide policy (I also have a couple of global filters for site wide policy - something that was done in the ASP.NET web app using http modules).

Additionally, there is a navigation tracking filter, that handles the domain object that is associated with the current client application. On top of that, a mapping filter exists that handles the mapping as required between view model and domain model, updating the domain model held in distributed cache when necessary.

I have 'anonymised' the domain object and view model names, but the intent should be clear from this controller excerpt.

1:  [BasicRestrictionPolicyFilter(RestrictionPolicyType = typeof(TemporalRestriction),   
2:                 Action = "UnavailableFeature",            
3:                 Controller = "Security",   
4:                 RouteName = SharedAreaRegistration.RouteName)]  
5:     [NotifiableRestrictionPolicyFilter(  
6:        RestrictionPolicyType = typeof(EnhancedSecurityRestriction),  
7:        RouteName = SharedAreaRegistration.RouteName,  
8:        MessageType = AlertMessageType.Notice,  
9:        LiteralMessage = "Sorry, you required an enhanced security ability to be active")]  
10:     [NavigationTrackerFilter(DomainObjectType = typeof(SomeApplication))]  
11:     public class ApplyController : BaseController {  
12:        [InjectionConstructor]  
13:        public ApplyController(ISomeService service) {  
14:           SomeService = service;  
15:        }  
16:        [PageFlow]  
17:        [MappingFilter(TargetType = typeof(DetailsViewModel))]  
18:        public ActionResult Start() {  
19:           return View(ViewData.Model);  
20:        }  
21:        [PageFlow(Order = 1)]  
22:        [MappingFilter(TargetType = typeof(DetailsViewModel))]  
23:        public ActionResult Details() {  
24:           return View(ViewData.Model);  
25:        }  
26:        [HttpPost]  
27:        [MappingFilter]   
28:        public ActionResult Details(DetailsViewModel details) {  
29:           if (ModelState.IsValid) return RedirectToAction("Confirmation");  
30:           return View(details);  
31:        }  
32:        [PageFlow(Order = 2)]  
33:        [MappingFilter(TargetType = typeof(SummaryViewModel))]  
34:        public ActionResult Confirmation() {  
35:           return View(ViewData.Model);  
36:        }  
37:        [HttpPost]  
38:        public ActionResult Confirmation(SummaryViewModel summary) {  
39:           SomeService.Order(GetTrackedHostedObject<SomeApplication>());  
40:           return RedirectToAction("Receipt");  
41:        }  
42:        [PageFlow(Order = 3)]  
43:        [MappingFilter(TargetType = typeof(SummaryViewModel))]  
44:        public ActionResult Receipt() {  
45:           return View(ViewData.Model);  
46:        }  
47:        private ISomeService SomeService { get; set; }  
48:     }  

I'll dissect some of this now.
1:  [BasicRestrictionPolicyFilter(RestrictionPolicyType = typeof(TemporalRestriction),   
2:                 Action = "UnavailableFeature",            
3:                 Controller = "Security",   
4:                 RouteName = SharedAreaRegistration.RouteName)]  
5:     [NotifiableRestrictionPolicyFilter(  
6:        RestrictionPolicyType = typeof(EnhancedSecurityRestriction),  
7:        RouteName = SharedAreaRegistration.RouteName,  
8:        MessageType = AlertMessageType.Notice,  
9:        LiteralMessage = "Sorry, you required an enhanced security ability to be active")]  

Explanation:

  • Lines 1-4: Associates a controller-scope basic policy filter with the Apply controller. This filter instantiates the policy type passed to it (TemporalRestriction), asks it if "everything is alright", and if not, redirects the current request to the Security controller, targeting the action "UnavailableFeature"
  • Lines 5-9: Employ a slightly more sophisticated restriction filter, which, on policy failure, redirects the user to a specific 'issues' view that can integrate with our CMS system or use a literal message
This is all rather simple, but the mapping filter behaviour is marginally more complicated.


10:     [NavigationTrackerFilter(DomainObjectType = typeof(SomeApplication))]  
11:     public class ApplyController : BaseController {  
12:        [InjectionConstructor]  
13:        public ApplyController(ISomeService service) {  
14:           SomeService = service;  
15:        }    

Explanation:

  • Line 10: A controller-scope navigation tracker filter is declared, that looks after a domain object of type SomeApplication. Its function is really just to ensure that the object exists in the distributed cache we have
  • Lines 11-15: Use Unity to inject a service that is required by the controller

16:        [PageFlow]  
17:        [MappingFilter(TargetType = typeof(DetailsViewModel))]  
18:        public ActionResult Start() {  
19:           return View(ViewData.Model);  
20:        }   

The Start action is the first in the simple workflow we have.

Explanation:

  • Line 16: PageFlow is a simple attribute, not a filter. It is used to support previous/next behaviour: decorating actions in this fashion allows the target actions for the base controller's next and previous actions to be inferred automatically (a sketch of the attribute appears after this list). As seen later in the controller, you can specify an 'order' property, to note the sequence in the workflow where an action 'resides'
  • Line 17: Request that the current domain model (managed by the navigation tracker filter) be mapped into a view model of type DetailsViewModel.
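
The PageFlow attribute itself could be as simple as this sketch (an assumed shape, not the project's actual code):

 [AttributeUsage(AttributeTargets.Method)]  
 public sealed class PageFlowAttribute : Attribute {  
     // Position of the decorated action within the workflow; the first step by default  
     public int Order { get; set; }  
 }  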
Things get more interesting when types can be inferred, as below:
26:        [HttpPost]  
27:        [MappingFilter]   
28:        public ActionResult Details(DetailsViewModel details) {  
29:           if (ModelState.IsValid) return RedirectToAction("Confirmation");  
30:           return View(details);  
31:        }    

Explanation:

  • Line 26: Note that this is a POST request
  • Line 27: Request mapping of the DetailsViewModel object to the existing Domain object - we know both these types, so no specification of them is necessary in the mapping filter declaration 

37:        [HttpPost]  
38:        public ActionResult Confirmation(SummaryViewModel summary) {  
39:           SomeService.Order(GetTrackedHostedObject<SomeApplication>());  
40:           return RedirectToAction("Receipt");  
41:        }   

Explanation:

  • Line 37: This is the client confirming that they wish to proceed
  • Line 39: Use our Unity injected service to place an order, supplying the domain object we have been tracking and updating.
Again, most of this is quite straightforward. But the mapping filter is performing a number of actions behind the scenes, including:

GET requests
Request that the tracked domain object be mapped into the view model, and set the controller's model to the newly created and populated view model. All this occurs in the OnActionExecuted(...) override (well, not literally, as the mapping filter behaviour is split across a number of classes); a sketch of this path follows the POST notes below.

POST requests
Two distinct possibilities here:
  • OnActionExecuting(...):  if the filter has been told to examine the model state, and it is valid, use the mapping service to map the view model into the domain object, and update the distributed cache (with the modified domain object).
  • OnResultExecuting(...): if the model state is invalid, and we have been told to care about that, ask the mapping service to execute any 'pre-maps' the view model defines (think of these as initialization actions), as the act of posting back will not have done so. The view model is then in a self consistent state. 
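
To make the GET path concrete, here is a minimal sketch (MappingService and DistributedCache are invented names for illustration, with System.Web.Mvc assumed; as noted above, the real filter behaviour is split across several classes):

 public class MappingFilterAttribute : ActionFilterAttribute {  
     public Type TargetType { get; set; }  
     public override void OnActionExecuted(ActionExecutedContext filterContext) {  
         // GET path: map the tracked domain object into the requested view model type  
         if (TargetType != null && filterContext.HttpContext.Request.HttpMethod == "GET") {  
             var domainObject = DistributedCache.GetTrackedObject(filterContext.HttpContext); // assumed helper  
             ((Controller)filterContext.Controller).ViewData.Model =  
                 MappingService.Map(domainObject, TargetType); // assumed mapping facade  
         }  
         base.OnActionExecuted(filterContext);  
     }  
 }  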
This is a 'toe in the water' implementation at the moment, but it seems to have promise.

Saturday, July 7, 2012

Smalltalk inspired extensions for c#

Having developed in Smalltalk for about 3 years in the early '90s, I still regard the language fondly and am always slightly saddened that it never achieved mainstream adoption. Without Smalltalk, I don't believe I would have so 'easily' become moderately proficient in object oriented thought - note, not analysis or design, but literally thinking 'as an object'. Anthropomorphising is still something I engage in.

Philosophy aside, I amused myself by considering the behaviour of the Smalltalk boolean object a few weeks ago after a brief period of development in Squeak (see also below). You 'talk' to the boolean, and can effectively ask it to do something if it is true or false.

A simple example below, that writes a message to the Transcript window depending on the outcome of the test (which returns a boolean):

 a > b  
 ifTrue:[ Transcript show: 'greater' ]  
 ifFalse:[ Transcript show: 'less or equal' ]  

So, being perverse, what could I do to mimic this behaviour in c#, so I might be able to say:

 (a > b)  
   .IfTrue(() => Console.Write("greater"))  
   .IfFalse(() => Console.Write("less or equal"));  

It's obvious really - use an extension method on the System.Boolean type. This is shown below:

 public static class BoolExtension {  
          public static bool IfTrue(this bool val, Action action) {  
              if (val) action();  
              return val;  
          }  
          public static bool IfFalse(this bool val, Action action) {  
              if (!val) action();  
              return val;  
          }  
      }  

Please don't misinterpret - I'm not espousing this as a necessarily good idea, more demonstrating that extension methods allow one to 'fake' the presence of interesting constructs present in other languages.

From the squeak website:
Welcome to the World of Squeak! Squeak is a modern, open source, full-featured implementation of the powerful Smalltalk programming language and environment. Squeak is highly portable - even its virtual machine is written entirely in Smalltalk, making it easy to debug, analyze, and change. Squeak is the vehicle for a wide range of projects, from multimedia applications and educational platforms to commercial web application development.

Saturday, February 25, 2012

TPL aware read/write streams for WCF streamed responses

I've recently needed to employ the WCF streamed response approach for a project. A caveat applied however - constructing the data to be streamed for a single request was relatively slow, and had the potential to be of sufficient magnitude to compromise the available physical memory of any one server in a cluster. And given that multiple requests could be 'in flight', a more reserved implementation seemed apposite.

An obvious mechanism became apparent. If the stream returned from the WCF service call was, server side, a read/write stream, I could connect the producer of data with the consumer (WCF framework, ultimately connected to the requesting client) and utilize standard synchronization primitives to control behaviour. Meaning, of course, we attempt to provide data to the client as soon as it is available and allow for 'arbitrary' amounts of data to be dispatched.

This translates to the WCF service call returning a System.IO.Stream, as expected, with the actual type being an implementation of (therefore) System.IO.Stream - my Read/Write stream.
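
In contract terms, that is simply (a sketch, with System.ServiceModel assumed; the service name and Request type are placeholders):

 [ServiceContract]  
 public interface IDocumentStreamService {  
     // The returned Stream is, concretely, the read/write stream described below  
     [OperationContract]  
     Stream CreateStreamedResponse(Request req);  
 }  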

Given the proposition that the producer of data to be consumed is 'slow' (where that means, some latency other than epsilon milliseconds), I'd need to allow for the consumer to be signalled that data was available - or it would expend time spinning.

So, create a read/write stream, controlled by an auto reset event, that both the consumer and producer use. In this instance, the consumer is WCF, and the producer is some particular task associated with the stream wrapper (an Action in fact). The wrapper, on calling Open(), returns a Stream object that is returned from the WCF call to the client, is asynchronously written to, and responds as necessary to WCF demands (typically, Read(...)). In this case, Read() is dependent on (see implementation) an AutoResetEvent.

It seems appropriate to commence with the stream implementation. It makes concrete the abstract class System.IO.Stream, and delegates read and write actions to a specialized 'buffer' type object, which uses the AutoResetEvent to ensure that read and write events occur as and when necessary (or reasonable). Most of this is filler, to properly implement the abstract class, the key methods being Read/Write.

1:    public class ReadWriteStream : Stream {  
2:      private bool mComplete;  
3:      public ReadWriteStream() : this(0) {  
4:      }  
5:      public ReadWriteStream(int syncBufferCapacity) {  
6:        SyncBuffer = new SynchronizedBuffer(syncBufferCapacity);  
7:      }  
8:      public bool Complete {  
9:        get {  
10:          return mComplete;  
11:        }  
12:        set {  
13:          mComplete = SyncBuffer.Complete = value;  
14:        }  
15:      }  
16:      public override bool CanRead {  
17:        get { return true; }  
18:      }  
19:      public override bool CanSeek {  
20:        get { return false; }  
21:      }  
22:      public override bool CanWrite {  
23:        get { return false; }  
24:      }  
25:      public override void Flush() {  
26:      }  
27:      public override long Length {  
28:        get {  
29:          return DBC.AssertNotImplemented<long>("This stream does not support the Length property.");  
30:        }  
31:      }  
32:      public override long Position {  
33:        get {  
34:          return DBC.AssertNotImplemented<long>("This stream does not support getting the Position property.");  
35:        }  
36:        set {  
37:          DBC.AssertNotImplemented<long>("This stream does not support setting the Position property.");  
38:        }  
39:      }  
40:      public override int Read(byte[] buffer, int offset, int count) {  
41:        return SyncBuffer.Read(buffer, offset, count);  
42:      }  
43:      public override long Seek(long offset, SeekOrigin origin) {  
44:        return DBC.AssertNotImplemented<long>("This stream does not support seeking");  
45:      }  
46:      public override void SetLength(long value) {  
47:        DBC.AssertNotImplemented<int>("This stream does not support setting the Length.");  
48:      }  
49:      public override void Write(byte[] buffer, int offset, int count) {  
50:        SyncBuffer.Write(buffer, offset, count);  
51:      }  
52:      public override void Close() {  
53:        if (!SyncBuffer.ContentAvailable && SyncBuffer.Complete)  
54:          SyncBuffer.Close();  
55:      }  
56:      private SynchronizedBuffer SyncBuffer { get; set; }  
57:    }  

Well, that's a lot of content, but the only interesting parts are the construction, Close() and Read(..) and Write(..). And Read/Write both use the SynchronizedBuffer concrete type - this is thus shown below:

1:   public class SynchronizedBuffer {  
2:    
3:    // TODO: This needs to be derived from the binding of the service calling this - can't find a way yet  
4:    private const int DefaultTimeOut = 15000;  
5:    private readonly AutoResetEvent mBufferEvent = new AutoResetEvent(false);  
6:    private bool mComplete;  
7:    
8:    internal SynchronizedBuffer(int syncBufferCapacity) {  
9:     TimeOut = DefaultTimeOut;  
10:     InitialBufferCapacity = syncBufferCapacity;  
11:     Buffer = new MemoryStream(syncBufferCapacity);  
12:    }  
13:    
14:    private MemoryStream Buffer { get; set; }  
15:    
16:    private int InitialBufferCapacity { get; set; }  
17:    
18:    internal int Read(byte[] buffer, int offset, int count) {  
19:       
20:     if (!Complete && !ContentAvailable &&  
21:      !mBufferEvent.WaitOne(TimeOut)) {  
22:       LogFacade.LogError(this, "Going to abend on auto reset event after timeout");  
23:       throw new ApplicationException("Timed out waiting for auto reset event");  
24:     }  
25:    
26:     int cnt;  
27:     lock (Buffer) {  
28:      cnt = Buffer.Read(buffer, offset, count);  
29:      if (Buffer.Length > InitialBufferCapacity)  
30:       Resize();  
31:     }  
32:     return cnt;  
33:    }  
34:    
35:    internal void Write(byte[] buffer, int offset, int count) {  
36:     lock (Buffer) {  
37:      long currentReadPosition = Buffer.Position;  
38:      Buffer.Seek(0, SeekOrigin.End);  
39:      Buffer.Write(buffer, offset, count);  
40:      Buffer.Seek(currentReadPosition, SeekOrigin.Begin);  
41:      mBufferEvent.Set();  
42:     }  
43:    }  
44:    
45:    private void Resize() {  
46:     long currentPosition = Buffer.Position;  
47:     long unread = Buffer.Length - currentPosition;  
48:     if (unread <= 0L)  
49:      unread = 0L;  
50:     else {  
51:      byte[] slice = new byte[unread];  
52:      Buffer.Read(slice, 0, (int)unread);  
53:      Buffer.Seek(0L, SeekOrigin.Begin);  
54:      Buffer.Write(slice, 0, (int)unread);  
55:      Buffer.Seek(0L, SeekOrigin.Begin);  
56:     }  
57:     Buffer.SetLength(unread);  
58:    }  
59:    
60:    internal int TimeOut { get; set; }  
61:    
62:    internal bool Complete {   
63:     get {  
64:      return mComplete;  
65:     }  
66:     set {  
67:      mComplete = value;  
68:      if (mComplete)   
69:       mBufferEvent.Set();  
70:     }  
71:    }  
72:    
73:    internal bool ContentAvailable {  
74:     get {  
75:      lock (Buffer) {  
76:       return Buffer.Length - Buffer.Position > 0;  
77:      }  
78:     }  
79:    }  
80:    
81:    internal void Close() {  
82:     if (!ContentAvailable) {  
83:      Buffer.Dispose();  
84:      Buffer = null;  
85:     }  
86:    }  
87:   }  
88:    

This very concrete type uses a memory stream to support read/write operations. The auto reset event is signalled by the write operation, and rather obviously, waited on by the read operation. A simple 'lock' statement preserves thread safety. The only point of interest is that of course multiple writes may occur before a read executes - which means that the memory stream is more difficult to control in terms of capacity. In the case of slowly consuming clients, this would need further thought - the current implementation uses a simplistic 'preserve unread' strategy.

Now we need to allow this read/write stream to be used by some arbitrary producer of data - our consumer is known to us, being the WCF infrastructure. So I defined a 'wrapper' class, which could do with renaming I admit, that accepts an Action that will be spun off in a TPL task - writing to the stream that the wrapper, well, wraps.

1:   public class ReadWriteStreamWrapper {  
2:    
3:    public ReadWriteStreamWrapper(Action<object> producer = null) {  
4:     Producer = producer;  
5:    }  
6:    
7:    public ReadWriteStreamWrapper SetSyncCapacity(int capacity = 0) {  
8:     BufferCapacity = capacity;  
9:     return this;  
10:    }  
11:    
12:    private int BufferCapacity { get; set; }  
13:    
14:    public ReadWriteStream Open() {  
15:     Stream = new ReadWriteStream(BufferCapacity);  
16:     AssociatedTask = Task.Factory.StartNew(Producer, Stream);  
17:     return Stream;  
18:    }  
19:     
20:    public ReadWriteStream Open<T>() where T : SinkWriter, new() {  
21:     Producer = sink => {  
22:            T writer = new T {   
23:               Sink = sink as ReadWriteStream   
24:            };  
25:            writer.Process();  
26:           };  
27:     return Open();  
28:    }   
29:    
30:    private Action<object> Producer { get; set; }  
31:    
32:    private ReadWriteStream Stream { get; set; }  
33:    
34:    public Task AssociatedTask { get; private set; }  
35:   }  

Points of note:

  • Lines 7-10: Fluent style method that sets the capacity of the underlying stream if desired
  • Lines 14-18: The 'business' so to speak. Create a ReadWriteStream, create a task with the identified producer Action and supply the stream as the state passed to the Action. Finally, return the stream object, which will be the 'return' value for WCF
A WCF operation implementation would thus create a wrapper object, supply a producer Action, and return the object returned by Open(). When WCF attempts to read from the stream (our read/write stream) it will wait on the auto reset event, and one would hope that before the wait times out, the associated producer (running in a TPL task), actually writes some data to the stream - which sets the auto reset event, allowing a waiting read to proceed. All very simple in actuality.

You might notice a reference to a 'SinkWriter' - this is an abstract convenience class (note to self: review semantics of utterances - makes it appear as if the 'convenience' is abstract, rather than the class!). It's shown below for completeness:

1:  public abstract class SinkWriter {  
2:    
3:    public ReadWriteStream Sink { get; set; }  
4:    
5:    public void Process() {  
6:     try {  
7:      Execute();  
8:     }  
9:     catch (Exception ex) {  
10:      LogFacade.LogError(this, "Sink Writer failure", ex);  
11:     }  
12:     finally {  
13:      Sink.Complete = true;  
14:     }  
15:    }  
16:    
17:    protected abstract void Execute();  
18:    
19:   }  


Here is a pro-forma WCF operation implementation to illustrate tying it all together:

1:  public Stream CreateStreamedResponse(Request req) {  
2:     return new ReadWriteStreamWrapper(sink => {  
3:      IDocumentGenerator gen =   
4:       DocumentGenerationFacade.CreateGenerator(req);  
5:      new DelegatingSinkWriter(gen, req) {  
6:       Sink = sink as ReadWriteStream  
7:      } .Process();  
8:     })  
9:      .SetSyncCapacity(ServiceConstants.SynchronizedBufferSize)  
10:     .Open();  
11:    }  

We define the operation using the form expected for a streamed response; that is, by returning the stream object directly from the call. The wrapper takes some implementation of a sink writer (it doesn't matter awfully what that is for the sake of this blog), which, as discussed, accepts a read/write stream that it will write to, with WCF acting in the role of consumer. We set the capacity according to some configuration, and then call Open, which will create the RW stream, start the producer task - and let matters proceed.

One minor point - configuration. You need to ensure the binding for the WCF server changes the transfer mode - my simple example below:

1:  <bindings>  
2:     <basicHttpBinding>  
3:      <binding name="StreamBinding"   
4:                       transferMode="StreamedResponse"   
5:                       sendTimeout="00:00:15"   
6:                       maxBufferSize="2048"/>  
7:     </basicHttpBinding>  
8:  </bindings>  

This implementation has been 'battle' tested, and holds up well under pressure. And what is missing? If anyone reads this, it will be obvious! Care to really control a memory stream?

Saturday, January 14, 2012

MEF: Instantiating parts via meta data query

I quite like MEF; something about the simplicity of its approach appeals, certainly when compared to MAF.

Recently, I needed to be able to instantiate a part from an [ImportMany] collection, where the collection in question was part of a factory; that is, all parts were lazily held in a central location (the factory). The clumsy way of doing this would be to get the type of a part, and use Activator to instantiate it. But a slightly more elegant (to my mind) mechanism is to use meta data to locate a part, and use the composition container to create as necessary.

Thus the extensions shown below. Given a MEF container, you can ask it to instantiate a part or parts using a Func that selects eligible parts based on the decomposed meta data held by MEF (looking at the implementation now, I'd posit that I could optimise the LINQ).

 public static class MEFExtensions {  

    public static IEnumerable<TValue> CreateParts<TValue>(  
          this CompositionContainer container,  
          Func<IDictionary<string, object>, bool> selector) {  
       return container.Catalog.Parts  
          .SelectMany(p => p.ExportDefinitions.Select(d =>  
             new Tuple<ComposablePartDefinition, ExportDefinition>(p, d)))  
          .Where(tup => tup.Item2.ContractName == typeof(TValue).FullName &&  
                        selector(tup.Item2.Metadata))  
          .Select(t => (TValue)t.Item1.CreatePart().GetExportedValue(t.Item2));  
    }  

    public static TValue CreatePart<TValue>(  
          this CompositionContainer container,  
          Func<IDictionary<string, object>, bool> selector) {  
       return CreateParts<TValue>(container, selector).FirstOrDefault();  
    }  
 }  
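
By way of illustration, a hedged usage sketch - the IWidget contract and the "Name" metadata key below are invented for the example:

 // Hypothetical part - the contract and metadata key are invented for illustration.  
 public interface IWidget { }  
   
 [Export(typeof(IWidget))]  
 [ExportMetadata("Name", "Fancy")]  
 public class FancyWidget : IWidget { }  
   
 // Given a CompositionContainer built over a suitable catalog, instantiate  
 // the single part whose meta data satisfies the selector:  
 IWidget widget = container.CreatePart<IWidget>(  
    meta => meta.ContainsKey("Name") && "Fancy".Equals(meta["Name"]));  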

As decomposed meta data in MEF is typed as an IDictionary<string, object>, this next set of extensions can be used if a straight (and simple) comparison of dictionary associations is required. IOW, a selector might look similar to:

dict => dict.EqualKeysAndValues(externallySuppliedMetaData)

 public static class IDictionaryExtensions {  

    public static bool EqualKeysAndValues<TKey, TValue>(  
         this IDictionary<TKey, TValue> dict, IDictionary<TKey, TValue> rhs) {  
       return EqualKeysAndValues(dict, rhs, Enumerable.Empty<TKey>());  
    }  

    public static bool EqualKeysAndValues<TKey, TValue>(  
         this IDictionary<TKey, TValue> dict,  
         IDictionary<TKey, TValue> rhs,  
         IEnumerable<TKey> exclusions) {  
       IDictionary<TKey, TValue> comp =  
          new Dictionary<TKey, TValue>().AddRange(  
               dict.Where(kvp => !exclusions.Contains(kvp.Key)));  
       return comp.Count == rhs.Count &&  
              !comp.Keys.Except(rhs.Keys).Any() &&  
              comp.All(kvp => rhs[kvp.Key].Equals(kvp.Value));  
    }  

    public static IDictionary<TKey, TValue> AddRange<TKey, TValue>(  
        this IDictionary<TKey, TValue> dict,  
        IEnumerable<KeyValuePair<TKey, TValue>> pairs) {  
       pairs.ToList().ForEach(kvp => dict[kvp.Key] = kvp.Value);  
       return dict;  
    }  
 }  
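
Tying the two together, another hedged sketch - the expected metadata here is invented, and note that MEF injects infrastructure keys of its own (such as ExportTypeIdentity), which the exclusions overload can be used to ignore:

 // Hypothetical usage - the expected metadata is invented for illustration.  
 var expected = new Dictionary<string, object> { { "Name", "Fancy" } };  
   
 IWidget widget = container.CreatePart<IWidget>(  
    meta => meta.EqualKeysAndValues(expected,  
       new[] { "ExportTypeIdentity" }));  // ignore MEF's infrastructure key  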

Saturday, September 17, 2011

False Interpreter 0.94

Been steadily working on the False interpreter; lambda support is fully operational and tested against a number of examples from Rosetta Code. All relatively straightforward - and I'm now making the hideous mistake of extending the command set, starting with stacking timers, used for coarse performance measurement. Example below:

1:  { Ethiopian multiplication }  
2:  )stEM)[2/]h:[2*]d:[1&]o:[0[@$][$o;![@@\$@+@]?h;!@d;!@]#%\%]m:  
3:  "  
4:  Ethiopian, 17 and 34: "  
5:  17 34m;!.{578}  
6:  "  
7:  ")ct)  

Line 2 uses the start timer command ")st", and the execution time is dumped to the console at Line 7 with ")ct".

Saturday, September 10, 2011

[$1=$[\%1\]?~[$1-f;!*]?]f:

I've nearly finished a C# 4 interpreter for the FALSE language (http://strlen.com/false/false.txt) - current source code at: http://falseinterpreter.codeplex.com/ - a few more operators to go, but the lambda support appears stable. It's not optimised at all as yet, and could do with some serious effort in that area!


FALSE code...


Factorial: [$1=$[\%1\]?~[$1-f;!*]?]f:
Primes < 100:  99 9[1-$][\$@$@$@$@\/*=[1-$$[%\1-$@]?0=[\$.' ,\]?]?]#


Work inspired by G H Hardy:
"I have never done anything 'useful'. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world."

Friday, August 19, 2011

Adaptive contracts for WCF

Or WCF in the raw! With my current project, I decided it would be useful to let clients request only the objects/data they will actually consume, rather than impose a specific data contract. From a solutions architecture perspective, a set of WCF services are publicly exposed for consumption, accepting and returning JSON (for reasons of brevity, given predicted limited bandwidth).

So how best to do this? Well, some research turned up this MSDN blog post... and WCF raw it shall be. I also examined OData in depth, and concluded that it was still immature for production use and required too much effort to make it truly lean.

So what does a client 'see' or 'do'? From my perspective, a client should tell me what 'domain' of objects it is interested in, and potentially indicate what 'zone' is of concern. For the domain of objects, it should be possible to note the 'name' of each object property of interest, and how the client wants to refer to it. For example, I might be interested in a composite (navigated) property of x.y.z, but want to refer to it as q. Again, this is a way of minimising bandwidth consumption. It also implies that the spectrum of available navigation routes is well known.

All code formatted using this excellent tool: http://codeformatter.blogspot.com/2009/06/about-code-formatter.html

So an implementation uses an XML file to capture this information; an abbreviated example is shown next:

 <?xml version="1.0" encoding="utf-8" ?>  
 <Templates>  
   <TemplateCollection zone="smartphone">  
     <Template name="Accounts">  
       <Field key="Name"/>  
       <Field key="CustomisedName"/>  
       <Field key="Number"/>  
       <Field key="FinancialData.Balance.Value"   
                  returnKey="Balance"/>  
       <Field key="FinancialData.Balance"   
                   returnKey="FormattedBalance"/>  
     </Template>  
   </TemplateCollection>  
 </Templates>  

And here is a JSON request a client might make:

 {"BespokeRequest":  
   { "Specification":   
    { "Zone":"smartphone", "Name":"Accounts" }  
   }  
 }  

So it's quite obvious that 'zone' identifies a collection of domain templates - and this request is interested in the 'Accounts' domain (as shown in the XML excerpt). The object type backing this domain naturally has a number of properties, and the 'smartphone' variant template just uses what it needs. This excerpt shows the composite property mapping in use, taking a navigation specification and returning it as a simple property:

 <Field key="FinancialData.Balance.Value"   
        returnKey="Balance"/>  

So the client wants to reference FinancialData.Balance.Value as Balance. However, as the point of this exercise is to allow arbitrary specification, this is also supported, using a JSON request similar to the following:

 {"BespokeRequest":  
  { "Specification":   
   { "Name":"Accounts" },   
  {"Definitions": [  
   { "Key":"Name","Value":""},  
   { "Key":"Number","Value":""},  
   { "Key":"FinancialData.Balance.Value","Value":"Balance"},  
   { "Key":"CustomisedName","Value":""},  
   { "Key":"FinancialData.Balance.Value","Value":"FormattedBalance"}  
  ]  
  }  
 }  

If a 'Value' is null or empty, the property identified by 'Key' will appear in the response with a JSON name that is the same as the 'Key' string.
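
To make that concrete, a purely hypothetical response for the 'Accounts' template might resemble the following - the field values are invented, and the exact envelope depends on the server side wrapper that appears later in this post:

 {"BespokeResponse":  
   { "Success": true,  
     "Accounts": [  
      { "Name": "Everyday",  
        "CustomisedName": "Holiday fund",  
        "Number": "00123456",  
        "Balance": 1024.50,  
        "FormattedBalance": "$1,024.50" }  
     ]  
   }  
 }  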

The rationale for the template approach is to minimise bandwidth consumption as far as possible for in house applications.

Of course we have to be able to represent this in a WCF service. For raw service methods, the return type is System.IO.Stream - which signals to the host container that the service implementation is handling the 'formatting' of the response. In my case, a number of services will be 'adaptive', so there is a mixin interface, as below:

   [ServiceContract]  
   public interface IAdaptiveService {  
     [WebInvoke(Method = "POST",   
          UriTemplate = "BespokeRequest",   
          ResponseFormat = WebMessageFormat.Json,  
          RequestFormat = WebMessageFormat.Json,   
          BodyStyle = WebMessageBodyStyle.Wrapped)]  
     [OperationContract]  
     [return: MessageParameter(Name = "BespokeResponse")]  
     Stream BespokeRequest(  
             [MessageParameter(Name = "BespokeRequest")]  
             BespokeObjectListRequest request);  
   }  

Technically, specifying the response format is unnecessary, as raw services have complete control of this aspect. The message parameter attribute is useful though, allowing a generic name to be used for the actual request being passed in.
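
As an aside on hosting: an endpoint like this runs over webHttpBinding with the webHttp endpoint behavior applied - a minimal sketch, with service and namespace names invented:

 <system.serviceModel>  
   <services>  
     <service name="MyApp.AdaptiveService">  
       <endpoint address="" binding="webHttpBinding"  
                 contract="MyApp.IAdaptiveService"  
                 behaviorConfiguration="rawJson"/>  
     </service>  
   </services>  
   <behaviors>  
     <endpointBehaviors>  
       <behavior name="rawJson">  
         <webHttp/>  
       </behavior>  
     </endpointBehaviors>  
   </behaviors>  
 </system.serviceModel>  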

And here are the related data contracts and supporting objects - noting that, where it makes sense, data members are allowed to be absent on deserialization - this again means fewer bytes being sent 'over the wire'.

   [DataContract(Namespace = "Integration")]  
   public class BespokeObjectListRequest {  
     [DataMember(IsRequired = true)]  
     public TemplateSpecification Specification { get; set; }  
     [DataMember(IsRequired = false)]  
     public List<FieldSet> Definitions { get; set; }  
   }  
   
   [DataContract(Namespace = "Integration")]  
   public class FieldSet {  
     [DataMember]  
     public string Key { get; set; }  
     [DataMember(IsRequired = false)]  
     public string Value { get; set; }  
   }  
   
   [DataContract(Namespace = "Integration")]  
   public class TemplateSpecification {  
     [DataMember]  
     public string Zone { get; set; }  
     [DataMember]  
     public string Name { get; set; }  
   
     public bool IsValid() {  
       return !String.IsNullOrWhiteSpace(Zone)   
                  && !String.IsNullOrWhiteSpace(Name);  
     }  
   }  

A base type provides the implementation of the IAdaptiveService interface, shown below. The implementation uses a variant of the centralized exception handling that I described in a previous post.

 public Stream BespokeRequest(BespokeObjectListRequest request) {  
   return RawExecute<MemoryStream>(GenericReponseName,  
     new WrappingProcessor(() => {  
       DBC.AssertNotNull(request, "Null request can't be interrogated");  
       DBC.AssertNotNull(request.Specification,  
         "Null request specification can't be interrogated");  
       DBC.Assert(request.Specification.IsValid(),  
         "The request specification is invalid");  
       string targetMethod = string.Format("Bespoke{0}",  
         request.Specification.Name);  
       MethodInfo info = GetType().GetMethod(targetMethod,  
         BindingFlags.NonPublic | BindingFlags.Instance);  
       DBC.AssertNotNull(info, string.Format("Method not found - {0}", targetMethod));  
       return (IProcessor)info.Invoke(this, null);  
     },  
     request));  
 }  
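
For the 'Accounts' domain, then, the concrete service exposes a non-public instance method whose name the reflection call above resolves - a hypothetical sketch (AccountsProcessor is invented for illustration):

 // Hypothetical target - located by reflection as "Bespoke" + Specification.Name.  
 // Must be a non-public instance method returning an IProcessor.  
 private IProcessor BespokeAccounts() {  
   return new AccountsProcessor();  // builds account objects per the active template  
 }  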

The RawExecute implementation is shown next:

 protected T RawExecute<T>(string baseName, IProcessor handler)  
     where T : Stream, new() {  
   WebOperationContext.Current.OutgoingResponse.ContentType = JSONContentType;  
   T stream = new T();  
   AssociationsWrapper wrapper = new AssociationsWrapper(baseName);  
   try {  
     CheckSessionStatus();  
     CheckOrganizationActive();  
     wrapper.Associations = handler.Process();  
   }  
   catch (Exception ex) {  
     LogFacade.LogFatal(this, "CAS failed", ex);  
     wrapper.Success = false;  
   }  
   finally {  
     wrapper.Write(stream);  
   }  
   return stream;  
 }  

There is a part II to this post that provides further details of interest.