Showing posts with label Javascript.

Wednesday, September 18, 2024

How to use Recaptcha V3 with a nodejs AWS lambda

Overview

I have a business site (https://x3audio.com) that features a contact form. 

I'm using AWS Cloudfront to deliver the site from an S3 bucket. To implement the contact form, I decided to use a lambda function, exposed as a function URL. 

This function URL can be called by a Javascript integration in the web page when a user completes and submits a very simple contact form. Once the recaptcha token has been 'scored', the lambda uses SES to send me an email.

In the era of bots, spam engines and the like, I can't just naively expose the URL and 'hope' everything will be alright. Two security measures have been employed:

  • Origin Access Control (OAC), so the function URL can only be invoked via Cloudfront
  • Recaptcha V3, to score each form submission and filter out likely bots

OAC was relatively easy to set up, but Recaptcha V3 proved a little problematic. I did eventually get it operational, and this post shares some of the issues I encountered. If you follow these steps, you will at least get the basic V3 flow working (this post does not cover the 'advanced' create assessment flow).

V3 has distinct client and server aspects. The client side is straightforward enough, but the server side was less so for me.

Get the token to the lambda

I needed to capture the token that Recaptcha V3 creates when a form is submitted and pass it to the backend lambda that I implemented. There are really two stages: first, the button that is embedded in the contact form:
 <button class="w-100 btn btn-lg btn-primary g-recaptcha"  
     data-sitekey="YOUR SITE KEY"  
     data-callback='onSubmit'  
     data-action='submit'  
     type="submit"  
     id="contactformbutton">  
     Send  
 </button>  
I'm using Bootstrap v5, so some of the markup reflects that. The "YOUR SITE KEY" value can be found in the Recaptcha section of the Google cloud console (you have to sign up to the V3 program) -- see below. 


The data-callback attribute on the button invokes a very simple piece of Javascript to GET the form data to my lambda -- I did not use POST, as that got quite complicated quite quickly:

 async function onSubmit(token) {  
  // mx, ma, rem and email are gathered from the contact form fields  
  const cfr = new Request("https://x3audio.com/contact?mx=" + mx   
                      + "&ma=" + ma + "&rem=" + rem +   
                     "&e=" + email, { method: "GET" });  
  cfr.headers.append('x-v3token', token);  
  cfr.headers.append('x-v3token-length', token.length);  
  try {  
     const response = await fetch(cfr);  
     // inspect the response / show a confirmation to the user as required  
  }  
  catch (ex) {  
     console.log('Contact form submission failed: ' + ex);  
  }  
 }  
Recaptcha V3 calls the onSubmit function and supplies the 'token' it has derived for the current user's interaction with the page. This is the token we now pass to the AWS lambda for scoring (asking Google to score it via an HTTPS POST).

I'm using the web standard Fetch API, so I pass some data I want as query string parameters, but embed the V3 token in the request as a header (called x-v3token) and also set a header with the length of the token (x-v3token-length). 

This second header is not strictly necessary, but I wanted to check the size of the token at source and when received, as Cloudfront has a fairly obscure set of limits in play.
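As a minimal sketch (not the full handler), the length check described above might look like this on the lambda side, using the same header names the client sets; the helper name is mine:

 function tokenLooksTruncated(event) {  
   // Compare what the browser said it sent with what actually arrived;  
   // a mismatch suggests a proxy/Cloudfront limit has trimmed the header.  
   const token = event.headers["x-v3token"] || "";  
   const declaredLength = parseInt(event.headers["x-v3token-length"] || "0", 10);  
   return declaredLength > 0 && token.length !== declaredLength;  
 }  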

Use the Fetch API in the lambda to get a token scored

My AWS lambda is written in NodeJS, running in a Node 20.x runtime. So, for the recaptcha side of things, I need to extract the token from the headers of an inbound request and ask Google to score it, using the Fetch API. 

Easy, right??

No. This caught me out. The Fetch API is available in nodejs 20.x, but the standard code editor in AWS cannot see it and flags fetch as undefined. To keep the editor happy, you have to include this line at the top of your lambda:
 /*global fetch*/  
Once you do that, you can use the Fetch API easily. What follows is an abbreviated lambda, having just the useful bits documented:

 export const handler = async (event) => {  
  const obj = await assess(event); 
  const response = {
    statusCode: 200
  };
  return response;
 };  
This is the lambda entry point. Obviously you return a response with a status and possibly a body, but here I'm just omitting most of the implementation and showing the call to assess which will do the Recaptcha v3 scoring.
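
To give a feel for how the result might then be used, here is a hedged sketch; sendContactEmail is a hypothetical helper (my real code uses SES), not something shown in this post:

 export const handler = async (event) => {  
   const obj = await assess(event);  
   if (!obj.is_bot) {  
     // only forward the form when the score passes the threshold set in assess  
     await sendContactEmail(event);  // hypothetical helper, e.g. wrapping SES  
   }  
   // return 200 either way, so a bot learns nothing from the response  
   return { statusCode: 200 };  
 };  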

The assess function is as below:
 
 async function assess(event) {   
  let obj = {   
   recaptcha_score: -1,  
   recaptcha_error_codes: [],  
   is_bot: true,  
   party: event["rawQueryString"],  
   source_ip: event.headers["x-forwarded-for"],   
   rc_v3_token: event.headers["x-v3token"],  
  };  
  try {  
   const rc_result = await checkToken(obj.rc_v3_token, obj.source_ip);  
   obj.recaptcha_score = rc_result.score;  
   obj.recaptcha_error_codes = rc_result.error_codes;  
   obj.is_bot = obj.recaptcha_score < 0.7;  
  }  
  catch (ex) {   
   console.log('Late exception: ' + ex, ex.stack);  
  }  
  return obj;  
 }  
So I set up an object that I will use to record the v3 score, whether it seems to be a bot and some other detail (the raw query string). I extract the v3 token from the headers, where it was set by the Javascript integration on my site (see above).

The event argument to the function is the http integration event received by the lambda.

There is a call to checkToken, which is the function (below) that sends the token to Google for scoring and returns the result to the assess function. 
 async function checkToken(token, ip) {  
     let score = -1;  
     let error_codes = [];  
     try {  
      const url = 'https://www.google.com/recaptcha/api/siteverify?secret=YOUR-SECRET-KEY&response=' + token;  
      let response = await fetch(url, { method: 'POST' });  
      const json = await response.json();  
      score = json.success ? json.score : -1;  
      error_codes = json.success ? [] : json["error-codes"];  
     }  
     catch (ex) {  
         console.log('Failed to check token: ' + ex, ex.stack);  
         error_codes = [ ex.toString() ];  
     }  
     return { score: score, error_codes: error_codes };  
 }  
The token argument is sent to the Google recaptcha endpoint (recaptcha/api/siteverify) along with the secret key of your Google cloud account. The response can then be inspected to see whether the check succeeded and what Google thought of the user (based on their interaction with the site).

You must replace YOUR-SECRET-KEY with your own unique one. 

Can't find your secret key? Nor could I, until I pressed Use Legacy key, see image:

 

Example result

Here is an example response from Google, showing a successful scoring request, what the score was (0.9, on a scale of 0.0 to 1.0) and so on.

 {  
  success: true,  
  challenge_ts: '2024-09-17T20:22:45Z',  
  hostname: 'x3audio.com',  
  score: 0.9,  
  action: 'submit'  
 }  
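
For comparison, a failed verification returns success: false and one or more of Google's documented error codes; the codes below are just illustrative examples:

 {  
   success: false,  
   'error-codes': [ 'invalid-input-response', 'timeout-or-duplicate' ]  
 }  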

Wednesday, January 16, 2019

Angular versus Blazor - working SPA examples compared

Overview

Most of us have observed the ascent of Angular (in all its versions) over the last few years. I enjoy working with Angular, but it does feel on occasion that it complicates matters for little gain. Sure, compared to KnockoutJS and ExtJS it works very well, but something always nags a little.

I have been following the evolution of Blazor with interest. I won't describe it in detail, but its use of WASM, Mono and the ability to create an SPA with (mostly just) c#, is appealing. All the usual arguments in favour of such an approach apply. It's only an alpha framework, but I thought it might be instructive/amusing to attempt to re-create an SPA I have using just Blazor, and compare the results.

The SPA

I have more than a passing interest in esoteric languages, and wrote one myself (WARP) for a laugh.

The SPA has these features:

  • Routing 
  • Use of MEF discovered language interpreters via a trivial .NET Core API
  • The ability to switch between languages 
  • Enter source code for a particular language that is dispatched to the API for execution
  • Respond to 'interrupts' received from the API, which signal that a user is required to enter input of some kind
  • Display output as it is received from the API execution of the source code supplied
  • The ability to cancel execution if a program is slow (esoteric languages tend to be interpreted and seemingly simple tasks can be glacial in terms of execution speed)
  • Display a summary of the language as simple text
  • Provide an off site link to examine the language in greater detail
There is a project on GitHub with full source. Note that web sockets are used to communicate between client and server. Notes on building and running are at the end of this post.

Angular SPA
Angular 7 is used as the base framework, using the angular2-websocket module, which still seems the best for web sockets. It's all hosted in VS 2017, and uses ng build (not webpack or similar). It's reasonably straightforward.

Blazor SPA
Built with Blazor.Browser 0.7.0 (client) and Blazor.Server 0.7.0 (server). Given the 3 models of Blazor deployment, the one chosen is an ASP.NET Core model.


Screen grabs
A couple of screen grabs, noting that I did not attempt to make the UIs identical. The images show the execution of a prime number 'finder' written in WARP, both given a start point of 199.

Angular


Blazor



Differences
There are some subtle differences, aside from the not so subtle use of c# and Razor as opposed to Typescript and HTML.

Binding
The source code text area (see screen grabs above) should be an 'instant' binding; that is, any key press should affect the state of the Run button. If you have not entered source code, you obviously can't run, but as soon as you enter one character, that is possibly a viable esoteric program.

In Angular, using a plain form, it's easy enough, using ngModel, and required and disabled attributes:

 <div class="row">  
      <div class="col-12">  
           <textarea cols="80" rows="10"   
             [(ngModel)]="sourceCode" style="min-width: 100%;"   
             name="sourceCode" required [disabled]="running">  
           </textarea>  
         </div>  
 </div>  
 <p></p>  
 <div class="row">  
    <div class="col-12">  
      <button type="submit" class="btn btn-primary"   
         [disabled]="!executionForm.form.valid || running">  
            Run  
       </button>&nbsp;    
       <button type="button" class="btn btn-danger"   
           (click)="cancel()" [disabled]="!running">  
            Cancel  
        </button>    
      </div>  
  </div>   

It was almost as straightforward in Blazor, but with a quirk:

 <div class="row">  
     <div class="col-12">  
         <textarea cols="80" rows="10" bind="@SourceCode" style="min-width: 100%;"  
              name="sourceCode" required   
              onkeyup="this.dispatchEvent(new Event('change', { 'bubbles': true }));">   
         </textarea>  
     </div>  
 </div>  
 <p></p>  
 <div class="row">  
     <div class="col-12">  
         <button type="submit" class="btn btn-primary" onclick="@Run"   
               disabled='@(NotRunnable || Running)'>  
             Run  
         </button>&nbsp;  
         <button type="button" class="btn btn-danger"   
             disabled="@(Running == false)" onclick="@StopExecution">  
             Cancel  
         </button>  
     </div>  
 </div>  

Now the disabled attribute behaviour is fine, just a bit of Razor. But the part I didn't like or want is the addition of an onkeyup handler on the textarea. Without it, the source code only updates when the textarea loses focus, which is not the behaviour that the Angular SPA has (and the Angular behaviour is the correct one).

Attributes
If you are not used to Razor the attribute usage looks a little strange. It's also not semi abstracted in the way that Angular is (compare 'click' with 'onclick'). But I can't say that it bothers me that much.

Sharing code
These SPAs are very simple, and really only have one shared type between them: an object called LanguageMetadata (a simple data object describing a language supported by the ELTB service/API). With Blazor, I can share that type between client and server by having a separate class library project referenced by both. With Angular, however, I have to define an interface (well, I don't, but it is nicer to do so) - so I haven't shared anything, I have copied something.

For these SPA's, it's not a big deal. But for more complex projects (and I've worked on some) the possible sharing approach of Blazor could be exceptionally useful.

Http client
Angular makes a lot of noise about its use of Rx and Observables - and yes, it is very appealing (I've just come off a project where Rx.NET was used heavily). Blazor can afford to take a different approach, using a 'standard' HttpClient with an async call.

It certainly has a more natural look and feel (excuse the hard coded URLs - it's just an example after all!):

Angular

  supportedLanguages() {  
   return this  
    ._http  
    .get(this.formUrl(false))  
    .pipe(map((data: any[]) => {  
     return <LanguageDescription[]>data  
    }));  
  }  

Blazor
 protected override async Task OnInitAsync() {  
     LanguageMetadata.All =   
        await httpClient.GetJsonAsync<List<LanguageMetadata>>   
            ("http://localhost:55444/api/EsotericLanguage/SupportedLanguages");  
 }  

When I look at it, the Ng approach with pipe and map just looks a little fussy.

Web sockets
Not all of the .NET APIs you might want exist in Mono. One such is the web sockets API, which underpins the implementation of both versions of the SPA. I couldn't use something like SignalR (it is supported by Blazor), as I have distinct request/response semantics when user input is required for an executing piece of esoterica.

My understanding is that support is coming, but the Javascript interop of Blazor allowed me to solve the issue relatively quickly. Unfortunately, it meant writing some raw JS to do so, as below:

 window.websocketInterop = {  
     socket: null,  
     connect: function (url, helper, msg) {  
         console.log("Connecting");  
         // keep the socket on the interop object rather than leaking an implicit global  
         var self = window.websocketInterop;  
         self.socket = new WebSocket(url);  
         self.socket.onopen = function (evt) {  
             msg && self.socket.send(msg);  
         }  
         self.socket.onmessage = function (event) {  
             console.debug("WebSocket message received:", event);  
             helper.invokeMethod("OnMessage", event.data);  
         };  
         self.socket.onclose = function (evt) {  
             console.log("Socket closed. Notify this..");  
             helper.invokeMethod("OnChannelClose");  
         }  
         console.log("Connected and ready....");  
     },  
     send: function (msg) {  
         console.log("Sending:" + msg);  
         window.websocketInterop.socket.send(msg);  
     },  
     close: function () {  
         console.log("Closing socket on demand");  
         var s = window.websocketInterop.socket;  
         s && s.close();  
     }  
 };  

(This is not anywhere near a production implementation).

The interop parts are seen in the cshtml file, InterpreterContent.cshtml. For example, when the esoteric source code is sent (after pressing the Run button), it invokes the JS function 'websocketInterop.connect' defined previously, sending it a url to connect to, a DotNetObjectRef and the actual source code as the first message to dispatch on the web socket:

 async Task Run() {  
         Output = string.Empty;  
         Running = true;  
         await JSRuntime.Current.InvokeAsync<object>  
                ("websocketInterop.connect",   
                InterpreterServiceUrl,   
                new DotNetObjectRef(this),   
                $"|{Language}|{SourceCode}");  
         StateHasChanged();  
 }  

The DotNetObjectRef encapsulates 'this' for this implementation, and allows the JS to call back into the 'this' instance. For example, when the socket is closed by the interpreter service (as it does when execution has completed), the JS calls
 
             helper.invokeMethod("OnChannelClose");  
 
which is defined in the cshtml file as:

 
     [JSInvokable]  
     public void OnChannelClose() {  
         Running = false;  
         StateHasChanged();  
     }  

with JSInvokable making it available to JS, and when called, sets Running to false, which will update the UI such that the Run button is now enabled, and the Cancel button disabled. Note the use of StateHasChanged, which propagates state change notification.

It's a double edged sword - the interop is well done, simple, works. But it should be a feature that is used infrequently.

Source code organization
One of the frequent criticisms of the Razor world is that it lets you mix in code and HTML freely, giving it a somewhat 'classic ASP' feel if one is not careful. The SPA Blazor implementation is an example of that, I haven't attempted to make it modular or separate it out particularly.

But for established Razor shops, with good or reasonable practice, this is easy to address.

Less code
I definitely ended up with less code in the Blazor version. It's much easier to understand, builds quicker and means my c# knowledge can be used directly in the main. 

Unit testing
I didn't implement any unit tests for the purpose of this exercise, it's not destined for production after all. Angular et al have good tools in this area, Jasmine, Karma and so on. But Blazor allows for componentization which will support unit tests easily enough. Probably a draw in this regard.

Summary
Blazor is indeed an interesting concept; currently incomplete, not ready for production and a little slow on initial use. But the promise is there, and I suppose we'll have to wait and see if MS continues with it, because as many others have noted, this is the sort of project that can arrive with a muted fanfare, gain some traction and then disappear.

Being standards based helps its case, as the Silverlight debacle might illustrate. The considerable ecosystems of Angular, React and others might keep it at bay for a while if it makes it to full production use, but I think there is room for it.

Building and running from GitHub
If you fancy building and running the examples, once cloned or downloaded from GitHub, and built - you then have to unzip the file API\ELTB-Services\interpreters-netcoreapp2.1.zip and move the assemblies therein to API\ELTB-Services\bin\Debug\netcoreapp2.1.

This is because the interpreter service relies on these to exist and be discoverable by MEF, and I didn't go to the trouble of fully integrating a build.

Friday, October 21, 2016

Angular 2: Creating decorators for property interception

As part of 'polishing' the esoteric languages testbed Angular 2 SPA, I thought it might be useful to allow entered source code to be auto-magically persisted. This led me on a small journey into the ng2 decorator mechanisms, which are surprisingly easy to implement and reminiscent of c# attributes, but without the static limitations.

.Net Core MVC hosted solution on GitHub. Node package source also on GitHub.

The essence of the idea was to be able to decorate a property of a type and have any setting of its value to be automatically persisted - along with a suitable getter implementation.

Sort of as shown below, meaning both the language and sourceCode properties should be persistent. The @LocalStorage decoration implies strongly that this persistence should be in HTML 5 local storage.

 export class ExecutionComponent {  
   @LocalStorage('ELTB') language: string;  
   @LocalStorage('ELTB') sourceCode: string;  
   programOutput = '';  
   programInput = '';  
   running = false;   
   inputRequired = false;  
   
   constructor(private _esolangService: EsolangService) {  
     console.log('built EC');  
   }  
 }  

So, how do you achieve this? There are plenty of detailed articles around for how to implement a decorator (at the class, property etc level), so I'm not going to describe it in detail.

It's easier just to present the code below, which has these main points of interest (note that this is aggregated code for presentation purposes from the node package source for this project):

  • Lines 2-7: Define an interface that represents the 'shape' of an object that can act as an interceptor for property gets and sets
  • Lines 9-14: Another interface, that defines the contract for an options type; one that can be passed as part of the decorator if it is required to achieve more finely grained behaviour, supply a factory for creating DelegatedPropertyAction instances and so on
  • Lines 16-35: the local storage decorator function entry point, that can be called with a union of types; either a string or an object that implements the AccessorOptions interface
  • Lines 37-39: a decorator function entry point for allowing general property interception, e.g. @PropertyInterceptor({ storagePrefix: "_", createToJsonOverride: false }). An example is shown later on.
  • Lines 41-82: A function that returns a function that implements the general property interception behaviour, with its behaviour directed somewhat by an instance of AccessorOptions
  • Lines 85-113: An implementation of a DelegatedPropertyAction that gets and sets based on local storage


1:    
2:  export interface DelegatedPropertyAction {  
3:    propertyKey: string;  
4:    storageKey: string;  
5:    get(): any;  
6:    set(newValue: any): any;  
7:  }  
8:    
9:  export interface AccessorOptions {  
10:    storagePrefix?: string;  
11:    factory?(propertyKey: string, storageKey: string): DelegatedPropertyAction;  
12:    preconditionsAssessor?(): boolean;  
13:    createToJsonOverride?: boolean;  
14:  }  
15:    
16:  export function LocalStorage(optionsOrPrefix: string | AccessorOptions) {  
17:    function ensureConfigured(opts: AccessorOptions): AccessorOptions {  
18:      opts.preconditionsAssessor =  
19:        opts.preconditionsAssessor ||  
20:        (() => window.localStorage && true);  
21:      opts.factory =  
22:        opts.factory ||  
23:        ((p, c) => new LocalStorageDelegatedPropertyAction(p, c));  
24:      return opts;  
25:    }  
26:    return AccessHandler(  
27:      ensureConfigured(  
28:        typeof optionsOrPrefix === "string" ?  
29:        <AccessorOptions>{  
30:          storagePrefix: optionsOrPrefix,  
31:          createToJsonOverride: true  
32:          }  
33:          : optionsOrPrefix  
34:      ));  
35:  }  
36:    
37:  export function PropertyInterceptor(options: AccessorOptions) {  
38:    return AccessHandler(options);  
39:  }  
40:    
41:  function AccessHandler(options: AccessorOptions) {  
42:    return (target: Object, key?: string): void => {  
43:    
44:      function makeKey(key: string) {  
45:        return (options.storagePrefix || '') + '/' + key;  
46:      }  
47:    
48:      if (!options.preconditionsAssessor || options.preconditionsAssessor()) {  
49:    
50:        let privateName = '$__' + key, storeKey = makeKey(key);  
51:    
52:        target[privateName] = options.factory(key, storeKey);  
53:    
54:        Object.defineProperty(target, key, {  
55:          get: function () {  
56:            return (<DelegatedPropertyAction>this[privateName]).get();  
57:          },  
58:          set: function (newVal: any) {  
59:            (<DelegatedPropertyAction>this[privateName]).set(newVal);  
60:          },  
61:          enumerable: true,  
62:          configurable: true  
63:        });  
64:    
65:        const notedKey = '_notedKeys', jsonOverride = 'toJSON';  
66:    
67:        target[notedKey] = target[notedKey] || [];  
68:        target[notedKey].push(key);  
69:    
70:        options.factory(notedKey, makeKey(notedKey)).set(target[notedKey]);  
71:    
72:        if (options.createToJsonOverride && !target.hasOwnProperty(jsonOverride)) {  
73:          target[jsonOverride] = function () {  
74:          let knownKeys: Array<string> = target[notedKey];  
75:            let result = { _notedKeys: knownKeys };  
76:            knownKeys.forEach(x => result[x] = target[x]);  
77:            return result;  
78:          };  
79:        }  
80:      }  
81:    }  
82:  }  
83:    
84:    
85:  export class LocalStorageDelegatedPropertyAction implements DelegatedPropertyAction {  
86:    
87:    storageKey: string;  
88:    propertyKey: string;  
89:    private val: any;  
90:    
91:    constructor(propertyKey: string, canonicalKey: string) {  
92:      this.propertyKey = propertyKey;  
93:      this.storageKey = canonicalKey;  
94:      this.val = JSON.parse(this.read());  
95:    }  
96:    
97:    get(): any {  
98:      return this.val;  
99:    }  
100:    
101:    set(newValue: any) {  
102:      this.write(JSON.stringify(newValue));  
103:      this.val = newValue;  
104:    }  
105:    
106:    private read() {  
107:      return localStorage.getItem(this.storageKey) || null;  
108:    }  
109:    
110:    private write(val: any) {  
111:      localStorage.setItem(this.storageKey, val);  
112:    }  
113:  }  

So, a contrived re-writing of the very first example, which adds no real value, could be:

 @LocalStorage('ELTB') language: string;  
 @LocalStorage({   
    storagePrefix: 'ELTB',   
    factory: (p, c) =>   
      new LocalStorageDelegatedPropertyAction(p, c) })   
   sourceCode: string;  

The solution on GitHub is a trivial test one; an example of its use is below, showing local storage contents mirroring the page content:
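
As an indication of what that looks like (illustrative values only, derived from the makeKey prefixing and JSON.stringify calls above), local storage ends up holding entries along these lines:

 ELTB/language    ->  "WARP"  
 ELTB/sourceCode  ->  "...program text..."  
 ELTB/_notedKeys  ->  ["language","sourceCode"]  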


Tuesday, July 19, 2016

Angular JS 2, RxJs, ASP.NET Core, .NETStandard, Typescript, Web sockets - new Github project

In essence
Having had my 'head down' in a rather pressing commercial engagement, I've had little time to experiment with some of the .NET and UI framework ecosystem changes that have been occurring.

So I decided to combine a whole truck load of them into one effort, creating an ASP.NET Core webapp esoteric language testbed (based on my esoteric interpreters GitHub project).

There are some screen shots at the end of this post showing an example in action. It's definitely a WIP, not quite ready to put on GitHub.

Implemented:
  • Communicate with a REST API to determine what languages are available for use (supported languages are determined using a simple plugin system written with MEF 2)
  • Accept a language source program
  • Use web sockets to request that the web app start remote interpretation, and allow client side 'interrupt' when the remotely executing program requires some user input
  • Have the execution page be fully contextual in terms of what is, and is not, permitted at any point

ASP.NET Core webapp
This was reasonably straightforward to put together and get to work. Things that did bite:
  • To work with AngularJS 2 RC2, the version of npm had to be different from the one shipped with VS 2015, meaning I had to fiddle with the external web tools settings to set the path
  • Initial restore of bower and npm packages took a long time, and there was little in the way of progress indication sometimes
  • Adding references to PCL's or .net standard assemblies often blew up the project.json file, resulting in duplicate Microsoft.NetCore.Platforms and Microsoft.NetCore.Targets dependencies that defeated package resolution. Editing project.json by hand cured this, but was not a pleasant experience
  • Running under IIS; what with creating an app pool running no managed code (IIS reverse proxying out to a Kestrel instance running in a different process) and then having to use the publish feature of VS to get it to work - I spent most of my time working with IIS express instead
  • Using a PCL as a reference in the web app causes all sorts of conniptions; VS 2015 still refuses to recognise the interfaces defined in a PCL of my own creation, and sometimes the build would fail. However, building using the .NET core command line tool (dotnet.exe) would cure this. Frustrating. 
AngularJS 2 
I never used Angular prior to v2. Never really had the opportunity, always seemed to be working in shops that used KnockoutJS (which is still a tidy library it must be said) or Ext JS (with its attendant steep learning curve).

Using it for this exercise was a pleasure. Sure, lots of set up issues, churn in the space, changes to routing discovered half way through, using Typescript (I know that is not mandatory!) - but all in all, positive.

There are a fair few components in the solution right now, but the key one is TestBedComponent, which in turn has two child components, LanguageComponent and ExecutionComponent. The first allows the selection of a language to use for interpretation (the available languages are derived from calling an injected service); the second is responsible for the 'real' work:
  • Allowing the entry of an esoteric program
  • Using an injected service to request remote execution
  • Responding to interrupts from the remote execution, meaning user input is required - showing a text box and button to allow the entry of user input that is then sent via the service to the remote server
The TestBedComponent has this form:

 import { Component, EventEmitter, Input, Output, ViewChild } from "@angular/core";  
 import { LanguageComponent } from './language.component';  
 import { ExecutionComponent } from './execution.component';  
 @Component({  
   selector: "testbed",  
   template: `  
       <languageSelector (onLanguageChange)="languageChanged($event)"></languageSelector>  
       <execution></execution>      
   `,  
   directives: [LanguageComponent, ExecutionComponent]  
 })  
 export class TestBedComponent {  
   @ViewChild(ExecutionComponent)  
   private _executionComponent: ExecutionComponent;  
   currentLanguage: string;  
   languageChanged(arg) {  
     console.log('(Parent) --> Language changed to ' + arg);  
     console.log(this._executionComponent);  
     this._executionComponent.changeLanguage(arg);  
   }  
 }  

I'm just embedding the two main components in the template, and using a reference to the execution component to communicate a change in the selected language. Language selection itself is handled by the LanguageComponent, which exposes an event emitter that the test bed component listens to.

There are other ways of doing this, such as using a shared service, but I wanted to experiment with as many different parts of Angular 2 as possible, rather than be a purist :-)

The language component uses an iterated Bootstrap row; it's (too) simple at the moment, but uses an ngFor to present a list of languages discovered after consulting a service, template excerpt as below:

 <div class="row">  
   <div class="col-xs-3" *ngFor="let language of languages">  
    <button class="btn btn-primary" (click)="languageChanged($event)">  
      {{language.Name}}  
    </button>  
   </div>  
 </div>  

The execution component is a little more interesting, having a more involved template, as below:

 <form (ngSubmit)="run()" #executionForm="ngForm">  
       <div class="row">  
         <div class="col-xs-12">  
           <h4>Source code</h4>  
         </div>  
         <div class="col-xs-12">  
           <textarea cols="80" rows="10" [(ngModel)]="sourceCode" style="min-width: 100%;"   
            name="sourceCode" required [disabled]="running"></textarea>  
         </div>  
       </div>   
       <p></p>  
       <div class="row">  
         <div class="col-xs-6">  
           <button type="submit" class="btn btn-primary" [disabled]="!executionForm.form.valid || running">  
            Run  
           </button>&nbsp;    
           <button type="button" class="btn btn-danger" (click)="cancel()" [disabled]="!running">  
            Cancel  
           </button>    
         </div>  
       </div>   
       <div class="row" *ngIf="inputRequired">  
         <div class="col-xs-12">  
           <h4>Input</h4>  
         </div>  
         <div class="col-xs-12">  
           <input [(ngModel)]="programInput" name="programInput" required/>  
           <button type="button" class="btn btn-primary" (click)="send()"   
             [disabled]="!executionForm.form.valid">  
           Send  
           </button>  
         </div>  
       </div>  
       </form>  
       <div class="row">  
         <div class="col-xs-12">  
           <h4>Output</h4>  
         </div>  
         <div class="col-xs-12">  
           <textarea cols="80" rows="10" [value]="programOutput" disabled style="min-width: 100%;"></textarea>  
         </div>  
       </div>  

As you can see, it uses a form, and a range of one and two way bindings, and a few ngIf's to control visibility depending on context.

The actual implementation of this component is also quite simple:
1:  export class ExecutionComponent {  
2:    language: string;  
3:    sourceCode: string;  
4:    programOutput = '';  
5:    programInput = '';  
6:    running = false;   
7:    inputRequired = false;  
8:    constructor(private _esolangService: EsolangService) {  
9:      console.log('built EC');  
10:    }  
11:    changeLanguage(lang) {  
12:      this.language = lang;  
13:      console.log(this.sourceCode);  
14:    }  
15:    run() {  
16:      console.log('Run! --> ' + this.sourceCode);  
17:      this.running = true;  
18:      this.programOutput = '';  
19:      this._esolangService.execute(  
20:        this.sourceCode,  
21:        {  
22:          next: m => this.programOutput += m,  
23:          complete: () => this.cancel()  
24:        },  
25:        () => this.inputRequired = true  
26:      );  
27:    }   
28:    send() {  
29:      console.log('Sending ' + this.programInput);  
30:      this._esolangService.send(this.programInput);  
31:      this.inputRequired = false;  
32:    }  
33:    cancel() {  
34:      this.running = this.inputRequired = false;  
35:      this._esolangService.close();  
36:    }   
37:  }  

Lines of interest:

  • 8 - EsoLangService is injected as a private member
  • 11 - target language is changed
  • 19-26 - the eso lang service is asked to execute the supplied source code. A NextObserver<any> is supplied as argument 2, and is hooked up internally within the service to a web sockets RxJs Subject (using https://github.com/afrad/angular2-websocket as a base). The third argument is a lambda that is called when the service receives a web socket message that indicates that user input is required. On receipt of this, inputRequired changes, which in turn affects this part of the template, displaying the user input text box and Send button:
<div class="row" *ngIf="inputRequired">

Screen shots
Just a few, with WARP as the target, executing a program that derives prime numbers from some user entered upper bound.

Initial page


Source entered

Execution started, but input required interrupt received

Execution complete










Saturday, June 4, 2016

Knockout Validation and ASP.NET MVC view model integration

I came across this particular issue a while ago, where I was using Knockout validation and had some validation annotated c# view models that were delivered by a REST API. I didn't have time on the project where I encountered this to solve it in an elegant fashion - so decided to do that in my spare time.

The problem to solve is to somehow have the validation attributes that have been applied to the C# view models applied in the same manner to view models created in the (Knockout JS based) client.

Doing this by hand is obviously clumsy and error prone, so instead the solution I now have:
  • Exposes Web API end points that can be queried by the client to gather meta data
  • Has a simple Javascript module that can interpret the response from a meta data serving endpoint call, applying Knockout validation directives (extensions) to a view model
The VS 2015 solution lives in GitHub.

A simple example follows - consider the C# view model below:

 public class SimpleViewModel {  
     [Required]  
     [StringLength(10)]  
     public string FirstName { get; set; }  
     [Required]  
     [StringLength(20)]  
     public string Surname { get; set; }  
     [Required(ErrorMessage = "You must indicate your DOB")]  
     [Range(1906, 2016)]  
     public int YearOfBirth { get; set; }  
     [RegularExpression(@"^[a-z]\d{3}$")]  
     public string Pin { get; set; }  
 }  

A web API method that can serve meta data on demand (security considerations ignored). It's all interface driven and pluggable, so not limited to the standard MVC data annotations or Knockout validation translation. Server side view model traversal is recursive and collection aware, so arbitrarily complex view models can be interrogated.


 public class DynamicValidationController : ApiController {  
     [HttpGet]  
     public dynamic MetadataFor(string typeName) {  
       return new ValidationMetadataGenerator()  
               .ExamineType(Type.GetType(typeName))  
               .Generate();  
     }  
 }  

And finally a very simple client use of the Javascript module. This example HTTP GETs a method that includes the validation meta data along with the view model in a wrapped type, but this need not be the case. The call to vmi.decorate is the key one, applying as it does the relevant metadata to the ko mapped view model using standard Knockout validation directives.

 $.getJSON("/api/ViewModelServing/WrappedSimpleViewModel",   
          null,   
         function (response) {  
          var obj = ko.mapping.fromJS(response.Model);  
          vmi.decorate({  
            model: obj,  
            parsedMetadata: response.ValidationMetadata,  
            enableLogging: true,  
            insertedValidatedObservableName: 'validation'   
          });  
          ko.validation.init({ insertMessages: false });  
          ko.applyBindings(obj, $('#koContainer')[0]);  
       }  
 );  

The object passed to decorate or decorateAsync also allows you to supply a property name (insertedValidatedObservableName) that will be set with a validatedObservable created during metadata interpretation - this is a convenience, meaning that after the example code above executes, calling obj.validation.isValid() will return true or false correctly for the entire view model.
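
As a small hedged example of that convenience (the save endpoint here is hypothetical, and assumes the decorate call above has already run), the inserted validatedObservable can gate a save:

 $('#saveButton').on('click', function () {  
     // obj.validation was inserted by vmi.decorate (insertedValidatedObservableName)  
     if (obj.validation.isValid()) {  
         // post the view model back - the endpoint here is hypothetical  
         $.ajax({  
             url: '/api/ViewModelServing/SimpleViewModel',  
             type: 'POST',  
             contentType: 'application/json',  
             data: ko.mapping.toJSON(obj)  
         });  
     }  
 });  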

Metadata on the wire looks like this:


Saturday, October 20, 2012

Modelling a file system in Javascript using the Composite pattern

A rather peculiar post this one. I've been building a terminal application using the interesting JQuery Terminal.  As part of this application, I needed to model a file system that could be created dynamically, with the usual suspects involved - directories, files, paths, permissions.

For my purposes, I also required the implementation to be 'pluggable' - that is, the source of the directory information and file content would be supplied by external objects.

I considered a few options, but opted finally for the simplest - using the composite pattern. So I have this trivial set of abstractions:

FileSystemResource
FileResource < FileSystemResource
DirectoryResource < FileSystemResource

where DirectoryResource (1) --- (*) FileSystemResource.

To provide a central point of access, there is a file system manager type, which in the example code only allows one to find a resource (directory/file) or change the current directory.

So here is the code in its entirety, and at the end of the post, some simple test statements. The first few lines are some helper bits and pieces, before starting with the definition of a simple permission.

 jfs = {};  
   
 // Inheritance helper  
 jfs.extend = function (childClass, parentClass) {  
      childClass.prototype = new parentClass;  
      childClass.prototype.constructor = childClass;  
      childClass.prototype.parent = parentClass.prototype;  
 };  
   
 jfs.fsConfig = {  
      rootDirectory: '/',  
      pathDelimiter: '/',  
      parentDirectory: '..',  
      currentDirectory: '.'  
 };   
   
 // Util  
 Array.prototype.first = function (match, def) {  
      for (var i = 0; i < this.length; i++) {  
          if (match(this[i])) {  
              return this[i];  
          }  
      }  
      return def;  
 };  
   
 String.prototype.splitButRemoveEmptyEntries = function (delim) {  
      return this.split(delim).filter(function (e) { return e !== ' ' && e !== '' });  
 };  
    
 // Simple permissions group  
 jfs.fileSystemPermission = function (readable, writeable) {  
      this._readable = readable;  
      this._writeable = writeable;  
 };  
 jfs.fileSystemPermission.prototype.writeable = function () {  
      return this._writeable;  
 };  
 jfs.fileSystemPermission.prototype.readable = function () {  
      return this._readable;  
 };  
 jfs.fileSystemPermission.prototype.toString = function () {  
      return (this.readable() ? 'r' : '-').concat((this.writeable() ? 'w' : '-'), '-');  
 };  
 jfs.standardPermission = new jfs.fileSystemPermission(true, false);  
 // Base resource  
 jfs.fileSystemResource = function () {  
      this._parent = undefined;  
      this._tags = {};  
 };  
 jfs.fileSystemResource.prototype.init = function (name, permissions) {  
      this._name = name;  
      this._permissions = permissions;  
      return this;  
 };  
 // Return the contents of the receiver i.e. for cat purposes  
 jfs.fileSystemResource.prototype.contents = function (consumer) {  
 };  
 // Return the details of the receiver i.e. for listing purposes  
 jfs.fileSystemResource.prototype.details = function (consumer) {  
      return this.toString();  
 };  
 jfs.fileSystemResource.prototype.name = function () {  
      return this._name;  
 };  
 jfs.fileSystemResource.prototype.getParent = function () {  
      return this._parent;  
 };  
 jfs.fileSystemResource.prototype.getTags = function () {  
      return this._tags;  
 };  
 jfs.fileSystemResource.prototype.setParent = function (parent) {  
      this._parent = parent;  
 };  
 jfs.fileSystemResource.prototype.permissions = function () {  
      return this._permissions;  
 };  
 jfs.fileSystemResource.prototype.type = function () {  
      return '?';  
 };  
 jfs.fileSystemResource.prototype.find = function (comps, index) {  
 };  
 jfs.fileSystemResource.prototype.absolutePath = function () {  
      return !this._parent ? '' :   
         this._parent.absolutePath().concat(jfs.fsConfig.pathDelimiter, this.name());  
 };  
 jfs.fileSystemResource.prototype.toString = function () {  
      return this.type().concat(this._permissions.toString(), ' ', this._name);  
 };  
 // Directory  
 jfs.directoryResource = function () {  
      this.children = [];  
 };  
 jfs.extend(jfs.directoryResource, jfs.fileSystemResource);  
 jfs.directoryResource.prototype.contents = function (consumer) {  
      return '';  
 };  
 jfs.directoryResource.prototype.details = function (consumer) {  
      consumer('total 0');  
      this.applyToChildren(function (kids) { kids.forEach(function(e) { consumer(e.toString()); }) });  
 };  
 jfs.directoryResource.prototype.type = function () {  
      return 'd';  
 };  
 jfs.directoryResource.prototype.addChild = function (resource) {  
      this.children.push(resource);  
      resource.setParent(this);  
      return this;  
 };  
 jfs.directoryResource.prototype.applyToChildren = function (fn) {  
      return this._proxy && this.children.length == 0 ? this._proxy.obtainState(this, fn) : fn(this.children);  
 };  
 jfs.directoryResource.prototype.setProxy = function (proxy) {  
      this._proxy = proxy;  
 };  
 jfs.directoryResource.prototype.find = function (comps, index) {  
      var comp = comps[index];  
      var node = comp === '' || comp === jfs.fsConfig.currentDirectory ? this :  
                (comp === jfs.fsConfig.parentDirectory ? this.getParent() :   
                this.applyToChildren(function(kids) { return kids.first(function(e) { return e.name() === comp; }); }));  
      return !node || index === comps.length - 1 ? node : node.find(comps, index + 1);  
 };  
 // File  
 jfs.fileResource = function () {  
 };  
   
 jfs.extend(jfs.fileResource, jfs.fileSystemResource);  
 // consumer should understand:   
 // accept(obj) - accept content  
 // failed   - producer failed, totally or partially  
 jfs.fileResource.prototype.contents = function (consumer) {  
      this._producer(this, consumer || this._autoConsumer);  
 };  
   
 jfs.fileResource.prototype.type = function () {  
      return '-';  
 };  
   
 jfs.fileResource.prototype.plugin = function (producer, autoConsumer) {  
      this._producer = producer;  
      this._autoConsumer = autoConsumer;  
 };  
   
 // FSM  
 jfs.fileSystemManager = function () {  
      this._root = new jfs.directoryResource();  
      this._root.init('', jfs.standardPermission);  
      this._currentDirectory = this._root;  
 };  
   
 jfs.fileSystemManager.prototype.find = function (path) {  
      var components = path.splitButRemoveEmptyEntries(jfs.fsConfig.pathDelimiter);  
      if (components.length === 0) components = [ '.' ];  
      return (path.substr(0, 1) === jfs.fsConfig.rootDirectory ? this._root : this._currentDirectory).find(components, 0);  
 };  
   
 jfs.fileSystemManager.prototype.currentDirectory = function () {  
      return this._currentDirectory;  
 };  
   
 jfs.fileSystemManager.prototype.root = function () {  
      return this._root;  
 };  
   
 jfs.fileSystemManager.prototype.changeDirectory = function (path) {  
      var resource = this.find(path);  
      if (resource) this._currentDirectory = resource;  
      return resource;  
 };  

And the test code; it creates a directory under the root called 389, and adds a file (called TestFile) to that directory, plugging in an example 'producer' function (that knows how to get the content of this type of file object) and an auto consumer - that is, a default consumer attached to the object. It is possible to pass any object in when calling the contents function and to not use default consumers at all.

Finally, and for illustration only, we use the file system manager find function to get the actual resource denoted by the full path name, and ask it for its contents. As we have an auto consumer associated with the object, it executes. In this case, we would dump two lines to the console log; 'some' and 'content'.

 var m = new jfs.fileSystemManager();   
 var d = new jfs.directoryResource();   
 d.init('389', jfs.standardPermission);   
 m.currentDirectory().addChild(d);   
 var f = new jfs.fileResource();   
 f.init('TestFile', jfs.standardPermission);   
 d.addChild(f);   
 f.plugin(function(fileResource, consumer) {   
        ['some', 'content'].forEach(function(e) { consumer(e) })  
     },   
     function(e) { console.log(e) });  
 var r = m.find('/389/TestFile');  
 r.contents();  
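
As noted above, you don't have to rely on the auto consumer set via plugin(); any consumer function can be passed directly to contents(), for example:

 // Pass an explicit consumer instead of the default one registered in plugin()  
 r.contents(function (line) { console.log('explicit: ' + line); });  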

Saturday, June 30, 2012

Sencha Touch 2: Models and proxies: associations - traversing and saving

It's difficult to locate reasonable information on using a proxy with a model that has associations - and it's even more difficult to actually have it work, as (in a JSON context) the standard JSON writer of Sencha Touch 2 does not traverse associations. The extended JSON writer in this post will traverse model associations, and also allows one to inject handlers for any required custom formatting - otherwise isomorphism is assumed.

Semantically, I don't personally like the relationship between a model and a proxy - it is unnatural to my mindset - but I do admit it is useful on occasion. So here I present a modified example, including an extension of the standard Ext.data.writer.Json. The code presented includes hard coded URLs, names and so on, which the real code base does not - as I said, it is an example.

First, the model, including a proxy section, with the extended writer. The User model 'hasMany' membership details - yes, not a compelling example, but it serves for illustrative purposes:

1:  Ext.define('app.model.User', {  
2:    extend: 'Ext.data.Model',  
3:    config: {  
4:        fields: [  
5:           {name: 'id',   type: 'int'},  
6:           {name: 'firstName', type: 'string'},  
7:           {name: 'surname', type: 'string'},  
8:           {name: 'age', type: 'int'}  
9:        ],  
10:        hasMany: {  
11:           model: 'app.model.MembershipDetails',  
12:           name: 'details'  
13:        },  
14:        proxy: {  
15:           type: 'ajax',  
16:           url: '/createUser',  
17:           method: 'POST',  
18:           writer: {  
19:                 type: 'associationsWriter',  
20:                 root: 'UserCreationRequest',  
21:                 encodeRequest: true  
22:           }  
23:        }  
24:     }  
25:  });  

Line 19 defines the type of the writer, which is shown below:


1:  Ext.define('app.lib.data.AssociationsWriter', {  
2:     extend: 'Ext.data.writer.Json',  
3:     alias: 'writer.associationsWriter',  
4:     config: {  
5:        // Array of objects of type:   
6:       // { name: xxx processor: function(name, data, record) { xxx },   
7:        // retain: true/false/undef}  
8:        customFieldHandlers: [],  
9:        customAssociationHandlers: []  
10:     },  
11:     constructor: function() {  
12:        this.callParent(arguments);  
13:        this.depth = -1;  
14:     },  
15:     getRecordData: function(record) {  
16:        this.depth++;  
17:        var data = this.callParent(arguments);  
18:        (this.depth == 0 ?   
19:          this.processCustomFields(data, record) : this).  
20:            processAssociations(data, record);  
21:        this.depth--;    
22:        return data;  
23:     },  
24:     processCustomFields: function(data, record) {  
25:        this.getCustomFieldHandlers().forEach(function(e) {  
26:           e.processor(e.name, data, record);  
27:        if (!e.retain) delete data[e.name];  
28:        }, this);  
29:        return this;  
30:     },  
31:     processAssociations: function(data, record) {  
32:        record.getAssociations().each(function(ass) {  
33:           if (ass.getType() !== 'belongsto') {  
34:              var customHandler =   
35:                this.getCustomAssociationHandlers().first(  
36:                 function(e) { return e.name === ass.getName(); }  
37:                );  
38:              var handler = customHandler ?   
39:                customHandler.processor : this.standardAssociation;  
40:              var store = ass.getStore().apply(record, null);  
41:              store.each(function (rec) {  
42:                         handler.apply(this, [ ass.getName(), data, rec ] );  
43:           }, this);  
44:           }  
45:      }, this);  
46:     },  
47:     standardAssociation: function(name, data, rec) {  
48:        if (!data[name]) data[name] = [];  
49:      data[name].push(this.getRecordData.call(this, rec));  
50:     }  
51:  });  

The key override from the base type is getRecordData(Object). The code is fairly simple, the association handling function being processAssociations. As can be seen, a standard ST2 config section in this type allows you to associate custom field and association handlers if the normal behaviour does not suit requirements.


A little clumsily, line 33 handles the case where an association is encountered that should not be traversed. An associated object (membership details) includes a 'back association' with its containing object, in this instance a model type of user. 


Also noteworthy is line 40, which gets the dynamically generated store that will hold the associated objects. As can be inferred, association.getStore() returns a function, which we call using the apply function.


MembershipDetails is included below - note the FK field reference to user, 'user_id' - the default name expected by Sencha if not explicitly specified:




1:  Ext.define('app.model.MembershipDetails', {  
2:    extend: 'Ext.data.Model',  
3:    config: {  
4:        fields: [  
5:           {name: 'id',   type: 'int'},  
6:           {name: 'user_id', type: 'int'},  
7:           {name: 'joinDate', type: 'date'},  
8:           {name: 'promoCode', type: 'string'}  
9:        ],  
10:        associations: { type: 'belongsTo',   
11:                        model: 'app.model.User' }  
12:     }  
13:  });  



Finally, a different version of the model with a custom field handler and custom association handler:


1:  Ext.define('app.model.User', {  
2:    extend: 'Ext.data.Model',  
3:    config: {  
4:        fields: [  
5:           {name: 'id',   type: 'int'},  
6:           {name: 'firstName', type: 'string'},  
7:           {name: 'surname', type: 'string'},  
8:           {name: 'age', type: 'int'}  
9:        ],  
10:        hasMany: {  
11:           model: 'app.model.MembershipDetails',  
12:           name: 'details'  
13:        },  
14:        proxy: {  
15:           type: 'ajax',  
16:           url: '/createUser',  
17:           method: 'POST',  
18:           writer: {  
19:                 type: 'associationsWriter',  
20:                 root: 'UserCreationRequest',  
21:                 encodeRequest: true,  
22:                 // Custom handler only required when there is not an   
23:                 // isomorphic relationship between the model and target  
24:                 customFieldHandlers: [  
25:                    { name: 'surname', processor: function(name, data, record) {  
26:                                      data['lastName'] = record.get(name);  
27:                                    }  
28:                    }  
29:                 ],  
30:                 customAssociationHandlers: [  
31:                    // Name here matches the name of the association defined above  
32:                    { name: 'details', processor: function(name, data, record) {  
33:                          data[name] = {  
34:                                'when' : record.get('joinDate'),  
35:                                'voucher' : record.get('promoCode')     
36:                             };  
37:                          }  
38:                    }  
39:                 ],  
40:              }  
41:      }  
42:     }  
43:  });  

Line 25 has a field handler that morphs the surname property, making it instead lastName in the generated JSON request. Line 32 takes the details association and changes the names of joinDate and promoCode - of course, more sophisticated actions are possible. The default behaviour in the extended writer is to not further process, or include in the generated request, any fields or associations that are handled by custom handlers - but this can be overridden by including 'retain: true' in the custom handler definition.
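
For instance, a hedged sketch of a handler that keeps the original field as well as writing a derived one (the isAdult property here is purely hypothetical) might look like:

 customFieldHandlers: [  
    { name: 'age',  
      retain: true,   // keep the original 'age' field in the generated request  
      processor: function(name, data, record) {  
         data['isAdult'] = record.get(name) >= 18;   // hypothetical derived field  
      }  
    }  
 ]  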

Saturday, December 3, 2011

Sencha Touch: Repeating tasks

And so, some trivia, dear reader (I tried to discover the etymology of 'dear reader' - and drew a blank). You may use Ext.defer on occasion, and perhaps even the DelayedTask class. But you need to fall back to 'raw' Javascript to create repeating tasks; so here is a trivial set of extensions to the Ext.util.Functions class that provide the ability to create repeating tasks, along with some simple management of them. This use of base behaviour is not uncommon - how do you clear local storage? window.localStorage.clear().

 Ext.apply(Ext.util.Functions, {  
    repeat: function(taskName, fn, millis, zeroDayExecution) {
       this.tasks = this.tasks || {};  
       if (zeroDayExecution)  
          fn();  
       return this.tasks[taskName] = window.setInterval(fn, millis);  
   },  
   cancelRepeatingTask: function(taskName) {  
    if (this.tasks) {
      var id = this.tasks[taskName];  
      if (!Ext.isEmpty(id)) {  
         window.clearInterval(id);  
         delete this.tasks[taskName];  
      }
    }
   },  
   cancelAllRepeatingTasks: function() {  
    if (this.tasks)  
       Object.keys(this.tasks).forEach(function(key) { 
                                       this.cancelRepeatingTask(key); }, 
                                       this);         
   }  
 });  
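
Usage is then as simple as the sketch below (the task name and interval are arbitrary):

 // Poll every 30 seconds, firing once immediately (zeroDayExecution = true)  
 Ext.util.Functions.repeat('refreshFeed', function() {  
     console.log('refreshing...');  
 }, 30000, true);  
   
 // later, when no longer needed  
 Ext.util.Functions.cancelRepeatingTask('refreshFeed');  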

Tuesday, November 15, 2011

Sencha Touch: Workflow framework published


I've been working on a workflow framework for Sencha Touch, after repeatedly being in the position of wanting to compose tasks in an intelligible, maintainable and flexible manner. It employs some simple OO abstractions, a nod towards separation of concerns, and the handy inbuilt event framework of Sencha. It also utilises JSinq, which I find a rather handy little library - although it is relatively easy to convert to not use it in case of concerns over download/package size.
The project is published on codeplex: http://mobileflow.codeplex.com/
Project overview (excerpt)
A workflow framework for Sencha Touch mobile apps including automatic component management. The intent is to start 'small', with a basic workflow engine, and build to include hydration/dehydration of workflow instances (using the memento pattern and local storage for example) and a visual designer and DSL style code generation.
Currently uses Sencha Touch 1.1, but will upgrade to 2.0 when out of the preview stage.

Sunday, October 23, 2011

Sencha Touch: Ext.Picker.Slot and the disappearing template

Ext.Picker is a useful component indeed. But recently, one aspect of an associated class's implementation made me wonder exactly why it was done that way. Specifically, the implementation mounts an effective defense against using a custom template as the slot rendering mechanism, so I couldn't include images in the slot at all - which is exactly what I needed to do. Note that the behaviour inherited from DataView implies you can just supply a 'tpl' in configuration - but that simply is not the case.

(Note that the names of types and so on used in this post have been modified from the original implementation).

The class of interest is Ext.Picker.Slot, a private class and the default one for Ext.Picker 'slots'. Inside initComponent(), this assignment was the issue - forcing a standard template on you whether you like it or not:

 this.tpl = new Ext.XTemplate([  
       '<tpl for=".">',  
         '<div class="x-picker-item {cls} <tpl if="extra">x-picker-invalid</tpl>">{'   
          + this.displayField + '}</div>',  
          '</tpl>'  
     ]);  

There are a few ways of forcing in the template you want, but I chose to extend Ext.Picker.Slot, invoke the super type constructor, and then apply the template. As Sencha stresses, Ext.Picker.Slot is a private class, so this should be regarded as a temporary measure, not guaranteed to work with ST 2.0.

 m.ui.CustomSlot = Ext.extend(Ext.Picker.Slot, {  
   constructor: function(config) {  
     m.ui.CustomSlot.superclass.constructor.apply(this, arguments);  
     if (!Ext.isEmpty(config.template))   
        this.tpl = config.template;  
    }  
 });  
 Ext.reg('customslot', m.ui.CustomSlot);  

As part of the definition process, the type is registered with Sencha under an xtype ('customslot'). Below is an example of the template I need to apply:

 m.ui.pickerSourceObjectTpl = new Ext.XTemplate(  
 '<tpl for=".">',  
 '<div class="x-picker-item {cls} <tpl if="extra">x-picker-invalid</tpl>">',  
 '<div class="m-item-image">{[this.getImageElement(values)]}</div>',  
 '<div class="m-item-info">',  
 '<p class="m-name header">{[this.getDisplayName(values)]}</p>',  
 '</div>',   
 '</div>',   
 '</tpl>',  
 {  
   getImageElement: function(obj) {  
    return m.ui.renderingSupport.getImageHTMLElement(obj);  
   },  
   getDisplayName: function(obj) {  
    return !Ext.isEmpty(obj.CustomisedName) ?   
        obj.CustomisedName : obj.Name;  
   }  
 });  

As I have some very specific behaviour associated with the picker implementation, I extended Ext.Picker. Note the custom css class associated with the picker ('m-picker') - this is important for the actual operation of the picker in 'proper template' mode, as the attributes of the picker bar need to be set properly, or it looks rather strange.

 m.sencha.views.CustomPicker = Ext.extend(Ext.Picker, {  
    cls: 'm-picker',  
    defaultType: 'customslot',  
    constructor: function(config) {  
       m.sencha.views.CustomPicker.superclass.constructor.apply(this, arguments);  
       this.currentSource = '';  
       this.currentTarget = '';  
    },  
    filter: function() {  
       m.sencha.stores.sources.filterBy(function(rec, id) {  
          return m.domain.currentSession.isValidSource(rec.get('Number'));  
       });  
       this.slots[0].setSelectedNode(0);  
    },  
    listeners: {  
       pick: function(picker, obj, slot) {  
          this.currentSource = this.dispatchChangeToController({  
             action: 'sourceChanged',  
             selection: obj.Source,  
             cached: this.currentSource  
          });  
          this.currentTarget = this.dispatchChangeToController({  
             action: 'targetChanged',  
             selection: obj.Target,  
             cached: this.currentTarget  
          });  
       }  
    },  
    dispatchChangeToController: function(options) {  
       if (options.cached != options.selection && !Ext.isEmpty(options.selection)) {  
          Ext.dispatch({  
             controller: m.sencha.controllers.activeDispositionController,  
             action: options.action,  
             number: options.selection,  
             context: !Ext.isDefined(this.context) ? 'self' : this.context  
          });  
       }  
       return options.selection;  
    }  
 });  

Part of the SASS definition for the custom css class is shown below - note that I am in no way a SASS or css expert, and I can claim no responsibility for what is excerpted below:

 .x-sheet.m-picker{  
      top:0 !important;  
      height:$picker-row-height*2 !important;  
      .x-picker-mask{  
          .x-picker-bar{  
              background:none;  
              border-top:rgba(0,0,0,0.2) 1px solid;  
              border-bottom:rgba(0,0,0,0.2) 1px solid;  
              @include box-shadow(rgba(0,0,0,0.1) 0 0 4px 0);  
          }  
      }  
 }  

Finally, at another place in the code, we create slots to place in the picker sub type. Here, we just use the registered xtype, supply some base slot properties, and use the custom template property to specify our desired template.

 var slots = [{  
       xtype: 'customslot',  
       name: 'Source',  
       store: m.sencha.stores.sources,  
       valueField: 'Number',  
       template: m.ui.pickerSourceObjectTpl,  
       displayField: 'Name'  
    },  
    {  
       xtype: 'customslot',  
       name: 'Target',  
       store: m.sencha.stores.targets,  
       valueField: 'Number',  
       template: m.ui.pickerTargetObjectTpl,  
       displayField: 'Name'  
    }];  
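For completeness, a hedged sketch of handing these slots to the extended picker - the 'useTitles: false' setting is illustrative only:

 var picker = new m.sencha.views.CustomPicker({  
    useTitles: false,  
    slots: slots  
 });  
 picker.show();  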

Saturday, October 15, 2011

Dynamically building and destroying Sencha Touch components

This item targets Sencha Touch 1.1 - with Sencha Touch 2.0 on the horizon, it will be interesting to see if any of this might be better addressed.

Following some of the advice on optimising a Sencha Touch application, I decided it would be useful to build components on demand and handle their lifecycle in a centralized fashion. Further, I encapsulated this behaviour in a controller base type, and retained references to dynamically constructed components statically, so that all controller sub types can deal with components consistently. (As should be apparent, this example code is culled from an MVC implementation.)

The implementation in its entirety follows, then a brief segmented explanation of the salient points.

1:  m.sencha.controllers.BaseController = Ext.extend(Ext.Controller, {  
2:    /*  
3:     * options.viewportProperty, options.buildFunction, options.name,   
4:     * options.callback, options.maskText  
5:    */  
6:    buildOnDemand: function(options) {  
7:      var requiresBuild = !Ext.isDefined(options.viewportProperty) ||   
8:                options.viewportProperty.isDestroyed;  
9:      if (!requiresBuild) {  
10:        this.activateBuiltComponent(options);  
11:      }  
12:      else {  
13:        var enclosingController = this;  
14:        m.ui.platformFactory.showMask(  
15:           Ext.isDefined(options.maskText) ? options.maskText : 'Loading...',  
16:           function(mask) {  
17:              var compConfig = options.buildFunction();  
18:              options.viewportProperty = compConfig[options.name];  
19:              m.sencha.views.viewport.applyAndAdd(compConfig, options.name);  
20:              mask.hide();  
21:              m.sencha.controllers.BaseController.  
22:                demandBuiltComponents[options.name] = compConfig[options.name];  
23:              enclosingController.activateBuiltComponent(options);  
24:        },  
25:        m.config.shortMaskDelay  
26:       );  
27:      }  
28:    },  
29:    activateBuiltComponent: function(options) {  
30:      m.sencha.views.viewport.setActiveItem(options.viewportProperty);  
31:      if (options.callback)   
32:        options.callback(options.viewportProperty);  
33:    },  
34:    cleanup: function() {  
35:     for(var compName in m.sencha.controllers.BaseController.demandBuiltComponents)   
36:        this.destroyAsNecessary(compName);  
37:    },  
38:    destroyAsNecessary: function(name) {  
39:     var target =   
40:       m.sencha.controllers.BaseController.demandBuiltComponents[name];  
41:     if (Ext.isDefined(target) &&   
42:       (!Ext.isFunction(target.isDestroyed) || !target.isDestroyed))   
43:        m.sencha.views.viewport.remove(target, true);  
44:     m.sencha.controllers.BaseController.demandBuiltComponents[name] = null;    
45:    },  
46:    constructor: function() {  
47:      m.sencha.controllers.BaseController.superclass.constructor.apply(this, arguments);    
48:    }  
49:  }); 
50:  m.sencha.controllers.BaseController.demandBuiltComponents = {};  

Now, to the important aspects of the implementation:

1:  m.sencha.controllers.BaseController = Ext.extend(Ext.Controller, {  
2:    /*  
3:     * options.viewportProperty, options.buildFunction, options.name,   
4:     * options.callback, options.maskText  
5:    */  
6:    buildOnDemand: function(options) {  
7:      var requiresBuild = !Ext.isDefined(options.viewportProperty) ||   
8:                options.viewportProperty.isDestroyed;  
9:      if (!requiresBuild) {  
10:        this.activateBuiltComponent(options);  
11:      }  
12:      else {  
13:        var enclosingController = this;  
14:        m.ui.platformFactory.showMask(  
15:           Ext.isDefined(options.maskText) ? options.maskText : 'Loading...',  
16:           function(mask) {  
17:              var compConfig = options.buildFunction();  
18:              options.viewportProperty = compConfig[options.name];  
19:              m.sencha.views.viewport.applyAndAdd(compConfig, options.name);  
20:              mask.hide();  
21:              m.sencha.controllers.BaseController.  
22:                demandBuiltComponents[options.name] = compConfig[options.name];  
23:              enclosingController.activateBuiltComponent(options);  
24:        },  
25:        m.config.shortMaskDelay  
26:       );  
27:      }  
28:    },    

  • Line 6 declares the build function, intended to be used by clients to request that a component be constructed and then managed statically by the controller (a usage sketch follows this list). An 'options' object is expected, with the options as noted in the comment.
  • Lines 7-9: If the component already exists and is not marked as destroyed, just activate it.
  • Lines 13-25: Create a 'closure' style reference to the executing controller, and invoke an Ext deferred task to show a mask while the component is built (this is performed using a local 'platform' object). A deferred task is used because failing to 'yield' for a short period may cause the loading mask not to show at all.
  • Lines 17-20: Execute the client-supplied build function to create a new instance of the desired component, have the viewport (a panel with a card layout) add the component to itself, then hide the mask.
  • Lines 21-22: Record the component in a static object shared by all controller sub types.
  • Line 23: Activate the component.
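
As promised, a usage sketch. SettingsController, 'settingsView' and the panel contents are invented names for illustration; note that buildFunction returns a config object keyed by the supplied name, which is what applyAndAdd is handed in the base implementation above:

 m.sencha.controllers.SettingsController = Ext.extend(m.sencha.controllers.BaseController, {  
    showSettings: function() {  
       this.buildOnDemand({  
          name: 'settingsView',  
          viewportProperty: this.settingsView,  
          maskText: 'Preparing settings...',  
          buildFunction: function() {  
             // Return an object keyed by the component name  
             return { settingsView: new Ext.Panel({ html: 'Settings' }) };  
          },  
          callback: function(component) {  
             // component is now the active item in the viewport  
          }  
       });  
    }  
 });  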
 
29:    activateBuiltComponent: function(options) {  
30:      m.sencha.views.viewport.setActiveItem(options.viewportProperty);  
31:      if (options.callback)   
32:        options.callback(options.viewportProperty);  
33:    },    
  • Line 30: Have the view port activate the just built or pre-existing component
  • Lines 31-32: If the options supplied include a callback function to be executed after component activation, call it now
Now for some general cleanup and management behaviour. Line 34 (following) declares a sweeping cleanup function, and line 38 a function to destroy a named component. Note that we examine the object to see if the isDestroyed property is defined on the component instance - this property is dynamic, in that it does not exist on a component unless it has been destroyed.

34:    cleanup: function() {  
35:     for(var compName in m.sencha.controllers.BaseController.demandBuiltComponents)   
36:        this.destroyAsNecessary(compName);  
37:    },  
38:    destroyAsNecessary: function(name) {  
39:     var target =   
40:       m.sencha.controllers.BaseController.demandBuiltComponents[name];  
41:     if (Ext.isDefined(target) &&   
42:       (!Ext.isFunction(target.isDestroyed) || !target.isDestroyed))   
43:        m.sencha.views.viewport.remove(target, true);  
44:     m.sencha.controllers.BaseController.demandBuiltComponents[name] = null;    
45:    },  
46:    constructor: function() {  
47:      m.sencha.controllers.BaseController.superclass.constructor.apply(this, arguments);    
48:    }  
49:  }); 
50:  m.sencha.controllers.BaseController.demandBuiltComponents = {};  
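
A hedged usage sketch for the cleanup side - the logout handler and the login panel it switches back to are purely illustrative:

 onLogout: function() {  
    // Remove every demand-built component from the viewport and null the shared references  
    this.cleanup();  
    m.sencha.views.viewport.setActiveItem(m.sencha.views.loginPanel);  
 }  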

Moderately tidy and (ironically enough) in need of some optimisation itself.