Friday, December 6, 2024

Sepultus Deus - Pestilence EP

As the liner notes portend, the first and final EP release by Sepultus Deus (IOW, me for the most part).

I mastered all tracks with Ozone 11 Advanced, and reckon I have done quite a reasonable job of it. Most of the drum tracks were programmed with a step sequencer, which was a mostly positive experience.

All in all, a release I am happy with. Best to leave on a high note. 

The end of Sepultus Deus is come.

Friday, November 29, 2024

Philosophy of death

I recently read an excellently written article in Psyche, How not to fear your death, by Sam Dresser, an editor at Aeon/Psyche. It summarises Epicurean views on death, which are thought-provoking if not always agreeable. 

Additionally, the article includes some excellent links on alternate/competing philosophies, and a link to Shelly Kagan's (Yale) free philosophy course, succinctly entitled "Death" 😄

Well worth a look if you wish to contemplate death with some useful (if somewhat abstract) references to guide your thought. 

Wednesday, September 18, 2024

Limewashing with Resene paints

We had some 'classic' wallpaper in our lounge, which we could not wait to remove. The lounge itself is quite large, with high ceilings, so a block colour would be quite dull.

We decided to limewash instead, going for that rustic, time worn look. This post describes how we achieved the effect. I'm documenting it here in case someone finds it useful -- we found contradictory or confusing advice on the Resene site, so I've tried to keep this very plain.

Initial state

The walls were already plastered, and we had done some light repairs and skimming. These were then sanded and washed down with sugar soap. 

Materials

I estimate that we covered about 45 square metres with the quantities listed here. Also note that the colours listed are our choices 😀

  • 2x cutting-in brushes (rat's tail or similar)
  • 1 wide but not deep paint brush (10cm wide by about 1.25cm deep)
  • A roll of stockinette (10 m) from Mitre 10 or some muslin (say 3 m) from a store
  • Plenty of drop sheets
  • 1 litre of Resene FX Paint Effects Medium
  • 4 litres waterborne white primer
  • 4 litres of half duck egg blue (the base coat)
  • 250ml of Inside Back (the colour used to tint the FX Paint Effects Medium)
  • 1 roller and a few roller heads
  • 250ml bottle of Resene Hot Weather additive
Step 1: Prime
  • Cut in to a depth of about 5-7 cm
  • Roll first primer coat, let dry
  • Roll second primer coat, let dry
  • (Optional) Roll 3rd primer coat, let dry
After 3 coats:

Step 2: Apply base coat
  • Cut in to a depth of about 5-7 cm
  • Roll first base coat (half duck egg blue), let dry
  • Roll second base coat, let dry
Base coat applied:


Step 3: Mix limewash, apply and rag off
The mix ratios are important here: get them wrong (or forget them) and you'll just waste paint (and time). 

For the effect we show in the following pictures, our mix was this:

  • 80% FX Paint Effects Medium (the medium)
  • 10% Inside Back paint  (the tint)
  • 10% Hot weather additive (lets you do the work without rushing)
So, for a 1L tin of FX Paint Effects Medium, you'd add 125ml of tint (Inside Back) and 125ml of Hot Weather additive. If you want it darker or lighter, adjust the ratio of the tint. 

Make sure it is stirred very well.

Technique
The basic technique is that one person criss-cross paints the wall, and another person follows behind with scrunched-up muslin or stockinette and 'rags' off the paint that has just been put on. 

Ragging in our case was a 'twisting' motion of the hand, with some other artistic flourishes thrown in.

To ensure you don't get in each other's way, the 'criss-crosser' can have about a 3-7 minute head start. The limewash will not dry quickly (thanks to the Hot Weather additive).

When criss-crossing, make sure the entire wall is covered with paint, but don't get too fussy if some bits look lighter than others, as long as there is paint to rag off.

Caveat: don't leave a wall half done. We did it in stages, but always completed a wall.

Here is an image of criss crossing in progress. It looks pretty untidy and uneven, but that is exactly what you want. You can see the currently unpainted base coat to the right of the image.

Now just imagine the other person ragging off that paint, creating the desired effect. Note that however it looks when you have finished, it will get lighter by a "few shades" when dried. 

Here are some pictures of the final results - up close some parts look a bit marbled, others look like clouds, and in one place there appears to be a caricature of Alfred Hitchcock 😁 We only did one coat in the end, as it suited the look we were going for.




How to use Recaptcha V3 with a nodejs AWS lambda

Overview

I have a business site (https://x3audio.com) that features a contact form. 

I'm using AWS Cloudfront to deliver the site from an S3 bucket and wanted to include a contact form. To help me do this, I decided to use a lambda function, exposed as a function URL. 

This function URL can be called by a Javascript integration in the web page when a user completes and submits a very simple contact form. Once the recaptcha token has been 'scored', the lambda uses SES to send me an email.

In the era of bots, spam engines and the like, I can't just naively expose the URL and 'hope' everything will be alright. Two security measures have been employed:

  • Cloudfront Origin Access Control (OAC) on the function URL, so it can only be invoked via the distribution
  • Recaptcha V3 scoring of each form submission, to filter out likely bots

OAC was relatively easy to set up, but Recaptcha v3 proved a little problematic. I finally got it operational, though, and this post shares some of the issues I encountered. If you follow these notes, you will at least get the basic V3 flow working (this post does not cover the 'advanced' create-assessment flow).

V3 has distinct client and server aspects. The client side is straightforward enough, but the server side was less so for me.

Get the token to the lambda

I needed to capture the token that Recaptcha V3 creates when a form is submitted and pass it to the backend lambda that I implemented. Two stages really; first, the button that is embedded in the contact form:
 <button class="w-100 btn btn-lg btn-primary g-recaptcha"  
     data-sitekey="YOUR SITE KEY"  
     data-callback='onSubmit'  
     data-action='submit'  
     type="submit"  
     id="contactformbutton">  
     Send  
 </button>  
I'm using Bootstrap v5 so some of the markup is there. The "YOUR SITE KEY" can be found in the Recaptcha section in the google cloud console (you have to sign up to the V3 program) -- see below. 


The data-callback attribute in the button invokes a very simple piece of Javascript to GET the form data to my lambda -- I did not use POST, as that got quite complicated quite quickly:

 async function onSubmit(token) {  
  const cfr = new Request("https://x3audio.com/contact?mx=" + mx   
                      + "&ma=" + ma + "&rem=" + rem +   
                      "&e=" + email);  
  cfr.method = "GET";  
  cfr.headers.append('x-v3token', token);  
  cfr.headers.append('x-v3token-length', token.length);  
  try {  
     const response = await fetch(cfr);  
     // ... inspect the response and update the page (omitted)  
  }  
  catch (err) {  
     // ... report the failure to the user (omitted)  
  }  
 }  
Recaptcha V3 calls the onSubmit function and supplies the 'token' it has derived for the current user's interaction with the page. This is the token we now pass to the AWS lambda for scoring (asking Google to score it via an https POST).

I'm using the web standard Fetch API, so I pass some data I want as query string parameters, but embed the V3 token in the request as a header (called x-v3token) and also set a header with the length of the token (x-v3token-length). 

This second header is not strictly necessary, but I wanted to check the size of the token at source and when received, as Cloudfront has a fairly obscure set of limits in play.

Use the Fetch API in the lambda to get a token scored

My AWS lambda is written in NodeJS, running in a Node 20.x runtime. So, for the recaptcha side of things, I need to extract the token from the headers of an inbound request and ask Google to score it, using the Fetch API. 

Easy, right??

No. This caught me out. The Fetch API is available in Node.js 20.x, but the standard code editor in AWS cannot see it. To make it visible, you have to include this line at the top of your lambda:
 /*global fetch*/  
Once you do that, you can use the Fetch API easily. What follows is an abbreviated lambda, having just the useful bits documented:

 export const handler = async (event) => {  
  const obj = await assess(event); 
  const response = {
    statusCode: 200
  };
  return response;
 };  
This is the lambda entry point. Obviously you return a response with a status and possibly a body, but here I'm just omitting most of the implementation and showing the call to assess which will do the Recaptcha v3 scoring.

As below:
 
 async function assess(event) {   
  let obj = {   
   recaptcha_score: -1,  
   recaptcha_error_codes: [],  
   is_bot: true,  
   party: event["rawQueryString"],  
   source_ip: event.headers["x-forwarded-for"],   
   rc_v3_token: event.headers["x-v3token"],  
  };  
  try {  
   const rc_result = await checkToken(obj.rc_v3_token, obj.source_ip);  
   obj.recaptcha_score = rc_result.score;  
   obj.recaptcha_error_codes = rc_result.error_codes;  
   obj.is_bot = obj.recaptcha_score < 0.7;  
  }  
  catch (ex) {   
   console.log('Late exception: ' + ex, ex.stack);  
  }  
  return obj;  
 }  
So I set up an object that I will use to record the v3 score, whether it seems to be a bot and some other detail (the raw query string). I extract the v3 token from the headers, where it was set by the Javascript integration on my site (see above).

The event argument to the function is the http integration event received by the lambda.

There is a call to checkToken, which is the function (below) that sends the token to Google for scoring and returns the result to the assess function. 
 async function checkToken(token, ip) {  
     let score = -1;  
     let error_codes = [];  
     try {  
      const url = 'https://www.google.com/recaptcha/api/siteverify?secret=YOUR-SECRET-KEY&response=' + token;  
      let response = await fetch(url, { method: 'POST' });  
      const json = await response.json();  
      score= json.success ? json.score : -1;  
      error_codes = json.success ? [] : json["error-codes"];  
     }  
     catch (ex) {  
         console.log('Failed to check token: ' + ex, ex.stack);  
         error_codes = [ ex.toString() ];  
     }  
     return { score: score, error_codes: error_codes };  
 }  
The token argument is sent to the Google recaptcha endpoint (recaptcha/api/siteverify) along with the secret key of your Google cloud account. The response can then be inspected to see if the request succeeded and what Google thought of the user (based on their interaction with the site).

You must replace YOUR-SECRET-KEY with your own unique one. 

Can't find your secret key? Nor could I, until I pressed Use Legacy key -- see image:

 

Example result

Here is an example response from Google, showing a successful scoring request, what the score was (0.9, on a scale of 0.0 to 1.0) and so on.

 {  
  success: true,  
  challenge_ts: '2024-09-17T20:22:45Z',  
  hostname: 'x3audio.com',  
  score: 0.9,  
  action: 'submit'  
 }  

Saturday, July 15, 2023

Sidechain compression with Cakewalk

I use the Cakewalk DAW (Digital Audio Workstation) for my musical "experiments". For one composition recently, I wanted to have a "cheesy" DJ intro and a "You've been listening to..." outro. Both of these voice parts would be over the playing song, so I needed the track itself to "quiet down" when a voice part was active. 

I thought about using volume automation, but that's clumsy and the song itself has quite a few tracks. So it became obvious that using side chain compression was going to be the best approach.

So I've got n tracks that should lower in volume (get compressed) when a voice track is active. It's actually easy to do in Cakewalk once you know how!

First, create a new stereo bus with a compressor in the fx bin (I used the standard Sonitus compressor), and send it out to the master bus. I called it sidechain (perhaps because I lack imagination 😄).

Next, ensure that all tracks and other buses that you want to be compressed are routed through the new sidechain bus. Here's an image of bus to bus, the "Rhythm" and "Bass" buses routed to "sidechain":

Here's an image for some tracks, some of which go direct to the sidechain bus, others routed through intermediate buses - "solo break" goes direct, "Rhythm-R" goes via the "Rhythm" bus, which then goes to the "sidechain" bus: 


The key to making it work is making the voice track an input to the compressor on the stereo "sidechain" bus. Cakewalk exposes the sidechain input of the compressor as a "send". So if I click on the "Send" plus sign in the voice track, and select the compressor sidechain input menu item, this sets up that association.



As you can see, the voice track goes straight to the master bus. Finally, a few tweaks of the compressor to change the threshold, attack and release times, ratio and knee, and you're done:


There is another way to add a compressor to the bus, without using an existing VST, and it may give more pleasing results depending on your circumstances. The ProChannel 'rack' associated with all tracks and buses allows the addition of a PC4K S-Type compressor, which has a sidechain enable switch; see the image below. You can enable that compressor and send the controlling track to its input.





Tuesday, October 4, 2022

Rock Dog: a JUCE multi faceted VST3 plugin

I have been using JUCE for a while now, and have built a few plugins, including a "multi faceted" one (this post) and a DSP impulse response processing one, which is not yet released.

Rock Dog was the subject of a Kickstarter campaign, which unfortunately failed to raise the funds I wanted to help improve it further -- I needed some financial injection to allow me to start modelling physical (non linear response) hardware (still open to funding, of course!).  

The original Kickstarter campaign video is here.

With Rock Dog I tried to make a more interesting plugin, by including features I didn't often see discussed in JUCE forums, including:

  • Providing a range of features and ensuring that soft real time DSP requirements were met
  • UI themes and run time switching between them
  • Saving and loading named presets based on current plugin state
  • Multiple switchable distortion fx
  • Serially chained reverb (if activated in the plugin)
  • Combining the use of juce::dsp modules as well as custom algorithm implementations
  • Reading/Writing to the file system in a system independent manner
  • Use of "space saving" context menus
I hadn't used C++ for quite some time (JUCE is C++ only), so it took a while to reacquaint myself, but it wasn't too tortuous. As a committed amateur musician, using the Cakewalk DAW, I have found it quite pleasing to use my own plugins within the context of a DAW.

The only thing I have not managed to do yet is build a MacOS version - when I get around to uploading the project into my public github repo then I'll have a crack at a github action for that. There is a Windows version available.

And to finish, some screenshots.

Standard theme plugin:

And with a different theme activated:

Loading a preset you'd previously saved:

Changing a loaded distortion effect:

Saturday, May 21, 2022

atan mini processor

A simple arctangent waveshaper: each sample is driven, soft-clipped with atan, blended with the dry signal, then scaled by the output volume.

 void AtanMiniProcessor::processBlock(juce::AudioBuffer<float>& buffer, int inputChannels, int outputChannels, ProcessParameters& p) {  
   ScopedNoDenormals noDenormals;  
   // Clear any output channels that have no corresponding input channel  
   for (auto i = inputChannels; i < outputChannels; ++i)  
     buffer.clear(i, 0, buffer.getNumSamples());  
   for (int channel = 0; channel < inputChannels; ++channel) {  
     auto* channelData = buffer.getWritePointer(channel);  
     for (int sample = 0; sample < buffer.getNumSamples(); sample++) {  
       // Keep the unprocessed sample so the wet and dry signals can be blended  
       float cleanSig = *channelData;  
       // Apply drive, soft-clip with atan (normalised by 2/pi), blend with the dry signal, apply volume  
       *channelData *= p.drive * p.range;  
       *channelData = (((((2.0f / float_Pi) * atan(*channelData)) * p.blend) + (cleanSig * (1.0f - p.blend))) / 2) * p.volume;  
       channelData++;  
     }  
   }  
 }  

Saturday, April 24, 2021

The (garden) bed of Theseus

I suppose it's not even random, just contextual. 

We have some raised garden beds, notionally described as chattels GB-1 and GB-2. GB-2 has been built to a less than desirable standard, with the original builder not bothering to line the interior with polythene or other suitable barrier material to avoid internal "bed rot".

As some of the panels had rotted through and become friable, I had occasion to break out some tools and old fence palings and patchwork-fix the bed, as shown in the picture. It's not a stunningly professional job, but it does the trick, and in reality the vegetables aren't going to care.



And, as I sawed and hammered away, the philosophical conundrum that is The Ship Of Theseus sprang to mind. Has GB-2 retained its identity?

In terms of chattels, utility and occupied dimensions, GB-2 is in all respects unchanged at the macro level. However, it looks different, weighs slightly more, and has more nails and screws, so is technically a different object.

Has GB-2 retained its identity? Yes, with four dimensional theory applied. But in all senses practical, not really. I find much of the issue to be confounded by the application of frames of reference that are by their very nature incomplete, unsuitable or so broadly swept as to obscure rather than illuminate. Once we lapse into the truly metaphysical, I fear poor GB-2 may be consigned almost to a non existence.

Even everyday chores may bring philosophical wonder. If Bertrand Russell could obsess over the place of a table in philosophy, nature, mind and others, then GB-2 has as much right to be considered.

Saturday, January 23, 2021

Brevity or obfuscation in c#

Having a look at the Blazorise github source, I encountered this method:

     protected override string FormatValueAsString( IReadOnlyList<TValue> value )  
     {  
       if ( value == null || value.Count == 0 )  
         return string.Empty;  
       if ( Multiple )  
       {  
         return string.Empty;  
       }  
       else  
       {  
         if ( value[0] == null )  
           return string.Empty;  
         return value[0].ToString();  
       }  
     }  
Doesn't that seem like a lot of code for some simple behaviour? I've seen development styles like this before, often with the argument that it's easy to read. I suppose that is true. 

But it also misses the point -- you are not using the c# language and libraries to their full extent...this does:
     protected override string FormatValueAsString(IReadOnlyList<TValue> values)  {  
          var result = values?.FirstOrDefault()?.ToString();  
          return result == null || Multiple ? string.Empty : result;  
     }  
It's shorter, loses no meaning, uses LINQ acceptably and has one return statement instead of four. I also took the liberty of removing the strange spacing around the method argument and naming it more appropriately. And using the K&R statement styling. 

However, at the expense of some readability, but with a better performance profile, you could write:
     protected override string FormatValueAsString(IReadOnlyList<TValue> values)    
         =>  Multiple ? string.Empty : values?.FirstOrDefault()?.ToString() ?? string.Empty;  

If I was being picky:
  • I'd have left a QA comment asking if the method name was really appropriate -- it's borderline IMO. 
  • The shorter implementations allow for the possibility that the ToString() method of TValue might return null (a defect in that case) - you can't discount that as a possibility, and it would possibly break the caller of the method
  • An engineering approach might include a pre-condition that 'values' cannot be null (sketched after this list)
  • A post condition would be that the return result is always not null
  • The use of 'Multiple' looks a little forced - without delving further, could this test be avoided altogether and handled outside the method?
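Pulling the first few of those observations together, a hedged sketch of a guarded variant (same member names as the original; the precondition/postcondition choices are mine, not Blazorise's, and the usual System and System.Linq usings are assumed):

     protected override string FormatValueAsString(IReadOnlyList<TValue> values) {
          // Pre-condition: the caller must supply a list (it may be empty)
          if (values == null) throw new ArgumentNullException(nameof(values));
          // Post-condition: never return null, even if TValue.ToString() does
          return Multiple ? string.Empty : values.FirstOrDefault()?.ToString() ?? string.Empty;
     }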
I'm very much a fan of internal as well as external quality. 

Monday, September 21, 2020

Badass Bass II Bridge

I've got a well set up Fender Jazz Bass (US); nice low action, totally true neck - in short, a veritable slap and pop monster. But I always thought the stock Fender bridges were a bit cheap and flimsy looking, and not fit for the high quality instruments that we know Fenders are.

So I recently had a Badass Bass II bridge fitted, by Weta Guitars. And what did I expect? Many pundits suggest longer sustain, "efficient sound coupling" (whatever that is supposed to be) and improved balance. It's a drop in replacement, so I suppose you could fit one yourself, but I didn't want to take the risk.

About a year in, I must admit I'm not noticing a huge difference. You feel the additional heft at the bridge end of the bass of course, but only for a bit as you acclimatise. Where it does work for me is appearance - it just looks the part. Perhaps this is a case of form over function.



 

Thursday, July 30, 2020

Azure Cosmos DB - Partition keys


General
Just come out of a gig where Cosmos DB was the persistence engine of choice, using the SQL API. If you don't know much about Cosmos, see here.

Partition keys
One of the architectural decisions made by Microsoft confuses me. They have a notion of a logical partition, which can have a maximum size of 10GB. It is expected that any non-trivial Cosmos DB usage has to arrange for objects/documents to be spread across multiple logical partitions.

Therein lies the rub. Cosmos DB won't do any of this partitioning for you; it is entirely up to you to arrive at some satisfactory scheme, which involves your implementation generating a partition key that ideally reflects the guidelines that Microsoft share.

For the domain I was in, a number of external organisations submitted many documents to the client, and these submissions would be distributed over a number of years, and easily exceed the 10GB logical partition limit. One of the key guidelines from Microsoft is to avoid 'hot partitions' - that is, a partition that gets used heavily to the exclusion of almost any other. This has quite serious performance implications.

So, given we don't want hot partitions, that rules out using a partition key that uses the year for any submission, as there is a strong locality of reference in play - that is, the external organisations tend to focus on the most recent temporal activity and hence Cosmos action would tend to focus on one partition for a year!

In the end, knowing that each external organisation had a unique 'organisation number', and using a sequence/modulo scheme, an effective partitioning approach was implemented. Its operation is simple, and works as below:

  • An external organisation submits a JSON document via a REST API
  • On receipt, a Cosmos stored document is found or created based on the organisation number
  • This document has a sequence number and a modulo. We calculate sequence mod modulo.
  • We increment the sequence, and save the organisation specific document
  • We now have a pro forma partition key; for organisation 7633, we might have 7633-1, 7633-2 and so on

What this provides is a bounded, yet not meaningfully limited, number of partitions. By judicious selection of the modulo (in the case of my client, this was an integer), scalability is "assured". 
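A minimal sketch of the idea in C#, assuming a per-organisation tracking document with Sequence and Modulo properties (the names and shape are illustrative, not the client's actual types):

 public class OrganisationPartitionTracker {
   public string OrganisationNumber { get; set; }
   public long Sequence { get; set; }
   public int Modulo { get; set; }

   // Produce the partition key for the next submission, e.g. "7633-0", "7633-1", ...
   public string NextPartitionKey() {
     var bucket = Sequence % Modulo;
     Sequence++; // the updated tracking document is saved back to Cosmos afterwards
     return $"{OrganisationNumber}-{bucket}";
   }
 }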

Wednesday, January 16, 2019

Angular versus Blazor - working SPA examples compared

Overview

Most of us have observed the ascent of Angular (in all its versions) over the last few years. I enjoy working with Angular, but it does feel on occasion that it complicates matters for little gain. Sure, compared to KnockoutJS and ExtJS it works very well, but something always nags a little.

I have been following the evolution of Blazor with interest. I won't describe it in detail, but its use of WASM, Mono and the ability to create an SPA with (mostly just) c#, is appealing. All the usual arguments in favour of such an approach apply. It's only an alpha framework, but I thought it might be instructive/amusing to attempt to re-create an SPA I have using just Blazor, and compare the results.

The SPA

I have more than a passing interest in esoteric languages, and wrote one myself (WARP) for a laugh.

The SPA has these features:

  • Routing 
  • Use of MEF-discovered language interpreters via a trivial .NET Core API
  • The ability to switch between languages 
  • Enter source code for a particular language that is dispatched to the API for execution
  • Respond to 'interrupts' received from the API, which signal that a user is required to enter input of some kind
  • Display output as it is received from the API execution of the source code supplied
  • The ability to cancel execution if a program is slow (esoteric languages tend to be interpreted, and seemingly simple tasks can be glacial in terms of execution speed)
  • Display a summary of the language as simple text
  • Provide an off site link to examine the language in greater detail
There is a project on GitHub with full source. Note that web sockets are used to communicate between client and server. Notes on building and running are at the end of this post.

Angular SPA
Angular 7 is used as the base framework, using the angular2-websocket module, which still seems the best for web sockets. It's all hosted in VS 2017, and uses ng build (not webpack or similar). It's reasonably straightforward.

Blazor SPA
Built with Blazor.Browser 0.7.0 (client) and Blazor.Server 0.7.0 (server). Given the 3 models of Blazor deployment, the one chosen is an ASP.NET Core model.


Screen grabs
A couple of screen grabs, noting that I did not attempt to make the UIs identical. The images show the execution of a prime number 'finder' written in WARP, both given a start point of 199.

Angular


Blazor



Differences
There are some subtle differences, aside from the not so subtle use of c# and Razor as opposed to Typescript and HTML.

Binding
The source code text area (see the screen grabs above) should be an 'instant' binding; that is, any key press should affect the state of the Run button. If you have not entered source code, you obviously can't run, but as soon as you enter one character, that is possibly a viable esoteric program.

In Angular, using a plain form, it's easy enough, using ngModel, and required and disabled attributes:

 <div class="row">  
      <div class="col-12">  
           <textarea cols="80" rows="10"   
             [(ngModel)]="sourceCode" style="min-width: 100%;"   
             name="sourceCode" required [disabled]="running">  
           </textarea>  
         </div>  
 </div>  
 <p></p>  
 <div class="row">  
    <div class="col-12">  
      <button type="submit" class="btn btn-primary"   
         [disabled]="!executionForm.form.valid || running">  
            Run  
       </button>&nbsp;    
       <button type="button" class="btn btn-danger"   
           (click)="cancel()" [disabled]="!running">  
            Cancel  
        </button>    
      </div>  
  </div>   

It was almost as straightforward in Blazor, but with a quirk:

 <div class="row">  
     <div class="col-12">  
         <textarea cols="80" rows="10" bind="@SourceCode" style="min-width: 100%;"  
              name="sourceCode" required   
              onkeyup="this.dispatchEvent(new Event('change', { 'bubbles': true }));">   
         </textarea>  
     </div>  
 </div>  
 <p></p>  
 <div class="row">  
     <div class="col-12">  
         <button type="submit" class="btn btn-primary" onclick="@Run"   
               disabled='@(NotRunnable || Running)'>  
             Run  
         </button>&nbsp;  
         <button type="button" class="btn btn-danger"   
             disabled="@(Running == false)" onclick="@StopExecution">  
             Cancel  
         </button>  
     </div>  
 </div>  

Now the disabled attribute behaviour is fine, just a bit of Razor. But the part I didn't like or want is the addition of an onkeyup handler on the textarea. However, without this, the source code only updates when the textarea loses focus, which is not the behaviour that the Angular SPA has (and is the correct behaviour).

Attributes
If you are not used to Razor the attribute usage looks a little strange. It's also not semi abstracted in the way that Angular is (compare 'click' with 'onclick'). But I can't say that it bothers me that much.

Sharing code
These SPAs are very simple, and really only have one shared type across them, an object called LanguageMetadata (a simple data object that holds an example of a language supported by the ELTB service/API). With Blazor, I can share that between client and server, by having a separate class library project referenced by both of them. However, with Angular, I have to define an interface (well, I don't, but it is nicer to do so) - so I haven't shared anything, I have copied something.

For these SPAs, it's not a big deal. But for more complex projects (and I've worked on some), the sharing approach possible with Blazor could be exceptionally useful.
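As an illustration only, the shared type might be as simple as the sketch below (the member names are hypothetical and the real project may differ; the static All property is the one used in the OnInitAsync snippet later, and a System.Collections.Generic using is assumed):

 // Lives in a class library referenced by both the Blazor client and the server
 public class LanguageMetadata {
     public string Name { get; set; }
     public string Summary { get; set; }
     public string ExampleProgram { get; set; }

     // Populated by the client once the supported languages are fetched
     public static List<LanguageMetadata> All { get; set; } = new List<LanguageMetadata>();
 }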

Http client
Angular makes a lot of noise about its use of Rx and Observables - and yes, it is very appealing (I just came off a project where Rx.NET was used heavily). Blazor can afford to take a different approach, using a 'standard' HttpClient with an async call.

It certainly has a more natural look and feel (excuse the hard coded URLs - it's just an example after all!):

Angular

  supportedLanguages() {  
   return this  
    ._http  
    .get(this.formUrl(false))  
    .pipe(map((data: any[]) => {  
     return <LanguageDescription[]>data  
    }));  
  }  

Blazor
 protected override async Task OnInitAsync() {  
     LanguageMetadata.All =   
        await httpClient.GetJsonAsync<List<LanguageMetadata>>   
            ("http://localhost:55444/api/EsotericLanguage/SupportedLanguages");  
 }  

When I look at it, the Ng approach with pipe and map just looks a little fussy.

Web sockets
Not all of the .NET APIs you might want exist in Mono. One such is the web sockets API, which underpins the implementation of both versions of the SPA. I couldn't use something like SignalR (it is supported by Blazor), as I have distinct request/response semantics when user input is required for an executing piece of esoterica.

My understanding is that support is coming, but the Javascript interop of Blazor allowed me to solve the issue relatively quickly. Unfortunately, it meant writing some raw JS to do so, as below:

 window.websocketInterop = {  
     socket: null,  
     connect: function (url, helper, msg) {  
         console.log("Connecting");  
         socket = new WebSocket(url);  
         socket.onopen = function (evt) {  
             msg && socket.send(msg);  
         }  
         socket.onmessage = function (event) {  
             console.debug("WebSocket message received:", event);  
             helper.invokeMethod("OnMessage", event.data);  
         };  
         socket.onclose = function (evt) {  
             console.log("Socket closed. Notify this..");  
             helper.invokeMethod("OnChannelClose");  
         }  
         console.log("Connected and ready....");  
     },  
     send: function (msg) {  
         console.log("Sending:" + msg);  
         socket.send(msg);  
     },  
     close: function () {  
         console.log("Closing socket on demand");  
         socket && socket.close();  
     }  
 };  

(This is not anywhere near a production implementation).

The interop parts are seen in the cshtml file, InterpreterContent.cshtml. For example, when the esoteric source code is sent (after pressing the Run button), it invokes the JS function 'websocketInterop.connect' defined previously, sending it a URL to connect to, a DotNetObjectRef and the actual source code as the first message to dispatch on the web socket:

 async Task Run() {  
         Output = string.Empty;  
         Running = true;  
         await JSRuntime.Current.InvokeAsync<object>  
                ("websocketInterop.connect",   
                InterpreterServiceUrl,   
                new DotNetObjectRef(this),   
                $"|{Language}|{SourceCode}");  
         StateHasChanged();  
 }  

The DotNetObjectRef encapsulates 'this' for this implementation, and allows the JS to call back into the 'this' instance. For example, when the socket is closed by the interpreter service (as it does when execution has completed), the JS calls
 
             helper.invokeMethod("OnChannelClose");  
 
which is defined in the cshtml file as:

 
     [JSInvokable]  
     public void OnChannelClose() {  
         Running = false;  
         StateHasChanged();  
 }  

with JSInvokable making it available to JS, and when called, sets Running to false, which will update the UI such that the Run button is now enabled, and the Cancel button disabled. Note the use of StateHasChanged, which propagates state change notification.

It's a double edged sword - the interop is well done, simple, works. But it should be a feature that is used infrequently.

Source code organization
One of the frequent criticisms of the Razor world is that it lets you mix code and HTML freely, giving it a somewhat 'classic ASP' feel if one is not careful. The SPA Blazor implementation is an example of that; I haven't attempted to make it modular or separate it out particularly.

But for established Razor shops, with good or reasonable practice, this is easy to address.

Less code
I definitely ended up with less code in the Blazor version. It's much easier to understand, builds quicker and means my c# knowledge can be used directly in the main. 

Unit testing
I didn't implement any unit tests for the purpose of this exercise, it's not destined for production after all. Angular et al have good tools in this area, Jasmine, Karma and so on. But Blazor allows for componentization which will support unit tests easily enough. Probably a draw in this regard.

Summary
Blazor is indeed an interesting concept; currently incomplete, not ready for production and a little slow on initial use. But the promise is there, and I suppose we'll have to wait and see if MS continue with it, because as many others have noted, this is the sort of project that can arrive with a muted fanfare, gain some traction and then disappear.

Being standards based helps its case, as the Silverlight debacle might illustrate. The considerable ecosystems of Angular, React and others might keep it at bay for a while if it makes it to full production use, but I think there is room for it.

Building and running from GitHub
If you fancy building and running the examples: once cloned or downloaded from GitHub, and built, you then have to unzip the file API\ELTB-Services\interpreters-netcoreapp2.1.zip and move the assemblies therein to API\ELTB-Services\bin\Debug\netcoreapp2.1.

This is because the interpreter service relies on these to exist and be discoverable by MEF, and I didn't go to the trouble of fully integrating a build.

Thursday, April 27, 2017

Auto generating asp.net core OData v4 controllers from an entity framework code first model

Even though I am no great fan of OData (leaky abstractions and all), I found myself in the position of thinking how I could make it work with asp.net core and entity framework core (there are many posts around that say it cannot be done).

The project that eventuated from these thoughts is on Github.

I fell back on T4 again, as, especially for REST APIs created with OData in the asp.net world, you'll likely have an entity framework code first model. And writing controllers and repositories by hand for such a well documented protocol as OData seems rather tedious - and ripe for automation.

In summary, interrogating a DbContext, any exposed DbSet<> objects would represent resource collections; from there, the entity type of the DbSet<> has properties that may or may not be exposed as parts of the API, as well as navigation properties that may also be exposed.
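To make that concrete, a rough sketch (not the actual T4 code) of how the interrogation step can work - reflect over the context type and treat each public DbSet<> property as a candidate resource collection:

 using System;
 using System.Linq;
 using System.Reflection;
 using Microsoft.EntityFrameworkCore;

 public static class ContextInterrogator {
   public static void ListResourceCollections(Type contextType) {
     var sets = contextType
       .GetProperties(BindingFlags.Public | BindingFlags.Instance)
       .Where(p => p.PropertyType.IsGenericType &&
                   p.PropertyType.GetGenericTypeDefinition() == typeof(DbSet<>));
     foreach (var set in sets) {
       var entityType = set.PropertyType.GetGenericArguments()[0];
       // A real generator would also honour attributes such as ApiExclusion at this point
       Console.WriteLine($"{set.Name} -> {entityType.Name}");
     }
   }
 }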

The project as it stands now uses:
  • Microsoft.AspNetCore.OData.vNext 6.0.2-alpha-rtm as the OData framework
  • Visual Studio 2017
  • EntityFrameworkCore 1.1.1
  • Asp.net core 1.1.1
  • Asp.net core Mvc 1.1.2 
And generates:
  • OData v4 controllers for each resource collection
  • Repositories for each entity type that is exposed 
  • Proxies for each repository, that intercept pre and post events for CUD actions, and allow for optional delegation to user specified intervention proxies
Attributes are also implemented that allow the generation process to be modified, examples:

  • ApiExclusion - exclude a DbSet<> from the API
  • ApiResourceCollection - supply a specific name to a resource collection
  • ApiExposedResourceProperty - expose a specific entity property as a resource property
  • ApiNullifyOnCreate - request that a property be nullified when the enclosing object is being created

From the example EF project, below is a DbContext marked up with attributes as desired by the author, excluding a couple of resource collections and renaming one:

 public class CompanyContext : DbContext, ICompanyContext {  
   
   public CompanyContext(DbContextOptions<CompanyContext> options) : base(options) {  
   }  
   
   public DbSet<Product> Products { get; set; }  
   
   [ApiExclusion]  
   public DbSet<Campaign> Campaigns { get; set; }  
   
   public DbSet<Supplier> Suppliers { get; set; }  
   
   [ApiResourceCollection(Name = "Clients")]  
   public DbSet<Customer> Customers { get; set; }  
   
   public DbSet<Order> Orders { get; set; }  
   
   [ApiExclusion]  
   public DbSet<OrderLine> OrderLines { get; set; }  
   
 }  

And likewise, for the Customer entity, some markup that exposes some properties as first class 'path' citizens of an API, and ensures that one must be null when an object of type customer is being created via the API:

 public class Customer {  
   
     public Customer() {  
       Orders = new List<Order>();  
     }  
   
     public int CustomerId { get; set; }  
   
     [ApiExposedResourceProperty]  
     [MaxLength(128)]  
     public string Name { get; set; }  
   
     [ApiNullifyOnCreate]  
     [ApiExposedResourceProperty]  
     public virtual ICollection<Order> Orders { get; set; }  
   
 }  

The OData controller generated for the Customers resource collection (which has been renamed 'Clients' by attribute usage) is:

 [EnableQuery(Order = (int)AllowedQueryOptions.All)]  
 [ODataRoute("Clients")]  
 public class ClientsController : BaseController<ICompanyContext, EF.Example.Customer, System.Int32, IBaseRepository<ICompanyContext, EF.Example.Customer, System.Int32>> {  
   
   public ClientsController(IBaseRepository<ICompanyContext, EF.Example.Customer, System.Int32> repo) : base(repo) {  
   }  
   
   [HttpGet("({key})/Name")]  
   public async Task<IActionResult> GetName(System.Int32 key) {  
     var entity = await Repository.FindAsync(key);  
     return entity == null ? (IActionResult)NotFound() : new ObjectResult(entity.Name);  
   }  
   
   [HttpGet("({key})/Orders")]  
   public async Task<IActionResult> GetOrders(System.Int32 key) {  
     var entity = await Repository.FindAsync(key, "Orders");  
     return entity == null ? (IActionResult)NotFound() : new ObjectResult(entity.Orders);  
   }  
   
 }  

The included BaseController performs most of the basic actions required. And then there is the repository generated, with again, a base type doing most of the useful work:

 public partial class ClientsRepository : BaseRepository<ICompanyContext, EF.Example.Customer, System.Int32>, IBaseRepository<ICompanyContext, EF.Example.Customer, System.Int32> {  
   
     public ClientsRepository(ICompanyContext ctx, IProxy<ICompanyContext, EF.Example.Customer> proxy = null) : base(ctx, proxy) {  
     }  
   
     protected override async Task<EF.Example.Customer> GetAsync(IQueryable<EF.Example.Customer> query, System.Int32 key) {  
       return await query.FirstOrDefaultAsync(obj => obj.CustomerId == key);  
     }  
   
     protected override DbSet<EF.Example.Customer> Set { get { return Context.Customers; } }  
   
     public override System.Int32 GetKeyFromEntity(EF.Example.Customer e) {  
       return e.CustomerId;  
     }  
   
   }  

Wednesday, April 19, 2017

Text template engine for generating content

I implemented this a while ago for the startup, and have been meaning to publish it to GitHub - now I have, here. It was used to support a multitude of text templates that had to be transformed to become part of email message content, and a flexible way to do that was required.

The idea is trivial: treat a stream of bytes/characters as containing substitution tokens and rewrite those tokens using a supplied context. It also includes iteration, expression support, context switching and a few other minor aspects.

It's a VS 2017, C# solution, targeting .netcoreapp 1.1 and .net 4.6+.

Once set up, transformation is ultra trivial, assuming a text template and some domain object being supplied to the pro forma method shown below:

        private string GenerateMessage<TObject>(string doc, TObject ctx) {
            ParseResult res = Parser.Parse(doc);
            EvaluationContext ec = EvaluationContext.From(ctx);
            // Local renamed so it doesn't clash with the 'ctx' method argument
            var executed = res.Execute(ExecutionContext.Build(ec));
            return executed.context.ToString();
        }




Tuesday, January 3, 2017

REST API with a legacy database (no foreign keys!) - a T4 and mini DSL solution

Overview
Encountered yet again; a legacy database, with no foreign keys, hundreds of tables, that formed the backbone of a REST API (CRUD style), supporting expansions and the like.

It wasn't that there were no identifiable relationships between objects, just that the way the database was generated meant they were not captured in the database schema. But expansions had to be supported in an OData-like fashion. So, assuming you had a resource collection called Blogs, and each Blog object had sub resources of Author and Readers, you should be able to issue a request like the following for a Blog with an id of 17:

http://..../api/Blogs/17?expand=Author,Readers

and expect a response to include expansions for Author and Readers.

That's easy then. Just use entity framework mapping/fluent API to configure an independent association with any referenced objects. Well, that can work and often does. But it does not cope well when some selected OData abstractions are included in the mix - and I was using these to expose some required filtering capabilities to consumers of the REST API. Simply put, when you create an ODataQueryContext using the ODataConventionModelBuilder type, independent associations cause it to implode in a most unpleasant fashion.

So, if I can't use independent associations, and each resource may have 1..n associations which are realised using joins, I can:
  • Write a mega query that always returns everything for a resource, all expansions included
  • Write specific queries by hand for performance reasons, as the need arises
  • Generate code that map expansions to specific implementations
Writing by hand was going to be tedious, especially as some of the resources involved had 4 or more expansions.

When I thought about the possible expansions for a resource, and how they can be associated to that resource, using non FK joins, it became apparent that I was dealing with a power set of possibilities.

For the Blogs example, with expansions of Author and Readers, I'd have the power set:

{ {}, {Author}, {Readers}, {Author, Readers} }
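As an aside, generating that power set (one subset per eventual query method) is only a few lines of C# - a sketch using the usual bit-mask trick:

 using System.Collections.Generic;
 using System.Linq;

 public static class ExpansionSets {
   // Each bit of 'mask' selects one expansion; 2^n masks give every subset
   public static IEnumerable<string[]> PowerSet(IReadOnlyList<string> expansions) {
     for (var mask = 0; mask < (1 << expansions.Count); mask++)
       yield return expansions.Where((_, i) => (mask & (1 << i)) != 0).ToArray();
   }
 }

 // ExpansionSets.PowerSet(new[] { "Author", "Readers" })
 //   => {}, {Author}, {Readers}, {Author, Readers}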

So the idea that formed was:
  • Use a mini DSL to capture, for a resource, its base database query, and how to provision any expansions
  • Process that DSL to generate pro forma implementations
  • Use a T4 text template to generate C# code
I solved this in one way for a client, but then completely rewrote it at home because I thought it may be of use generally. I then extended that with a T4 template that generates an MVC 6 REST API...meaning all the common patterns that you typically see in a REST API were represented.

The actual VS 2015 C# solution is on Github. The implementation itself is a bit more sophisticated than described here, for reasons of brevity.

DSL 
The purpose of the DSL input file is to describe resources, their associated database base query, and any expansions and how they are realised. There are two formats supported - plain text and JSON.

An elided text file example for Blogs is:

1:  tag=Blogs  
2:  singular-tag=Blog  
3:  model=Blog  
4:  # API usage  
5:  restResourceIdProperty=BlogId  
6:  restResourceIdPropertyType=int  
7:  #  
8:  baseQuery=  
9:  (await ctx.Blogs  
10:  .AsNoTracking()  
11:  .Where(expr)  
12:  .Select(b => new { Blog = b })  
13:  {joins}  
14:  {extraWhere}  
15:  .OrderBy(a => a.Blog.BlogId)  
16:  .Skip(skip)   
17:  .Take(top)  
18:  .ToListAsync())  
19:  #  
20:  expansion=Posts  
21:  IEnumerable<Post>  
22:  .GroupJoin(ctx.Posts, a => a.Blog.NonKeyField, post => post.NonKeyField, {selector})  
23:  #   
24:  expansion=Readers  
25:  IEnumerable<Party>  
26:  .GroupJoin(ctx.Parties.Where(r => r.Disposition == "reader"),   
27:     a => a.Blog.NonKeyField, party => party.NonKeyField, {selector})  
28:  #   
29:  expansion=Author  
30:  Party  
31:  .Join(ctx.Parties, a => a.Blog.NonKeyField, party => party.NonKeyField, {selector})  
32:  .Where(p => p.Author.Disposition == "author")  

Relevant lines:

  • Line 1: starts a resource definition
  • Lines 5-6: allow this DSL instance to be used to generate a REST API
  • Lines 8-18: The base query to find blogs, along with specific markup that will be replaced by the DSL processor (e.g. {selector}, {joins} and so on)
  • Lines 24-27: A definition of an expansion - linking a reader to a blog if a Party entity has a disposition of "reader" and the column "NonKeyField" of a Blog object matches the same column in a Party object. The expansion results in an IEnumerable<Party> object.
  • Lines 29-32: an Author expansion, this time (line 32) including a predicate to apply

Example class
After running the T4 template over the DSL file, a c# file is produced that includes a number of classes that implement the intent of the DSL instance.

The Blogs class (as generated) starts like this:

1:  public partial class BlogsQueryHandler : BaseQueryHandling {   
2:    
3:    protected override string TagName { get; } = "Blogs";  
4:    
5:    public const string ExpandPosts = "Posts";  
6:    public const string ExpandAuthor = "Author";  
7:    public const string ExpandReaders = "Readers";  
8:      
9:    public override IEnumerable<string> SupportedExpansions   
10:          { get; } = new [] { "Posts", "Author", "Readers"};  
11:    

Points:

  • Line 1: The Blogs query handler class subtypes a base type generated in the T4 that provides common, inherited behaviour for all generated classes
  • Lines 5-7: All the expansions defined in the DSL instance are exposed
  • Lines 9-10: An enumerable of all supported expansions is likewise created

Example method
Harking back to the power set comment, a method is generated for each of the sub sets of the power set that represents the query necessary to realize the intent of the expansion (or lack thereof).

Part of pre-T4 activity generates queries for each sub set using the content of the DSL instance. Methods are named accordingly (there are a number of configuration options in the T4 file, I'm showing the default options at work).

As below, the method name generated in T4 for getting blogs with the Author expansion applied is Get_Blogs_Author (and similarly, Get_Blogs, Get_Blogs_Readers, Get_Blogs_Author_Readers).


1:  private async Task<IEnumerable<CompositeBlog>>   
2:    Get_Blogs_Author(  
3:     BloggingContext ctx,   
4:     Expression<Func<Blog, bool>> expr,   
5:     int top,   
6:     int skip) {   
7:      return   
8:          (await ctx.Blogs  
9:          .AsNoTracking()  
10:          .Where(expr)  
11:          .Select(obj => new { Blog = obj })  
12:          .Join(ctx.Parties,   
13:              a => a.Blog.NonKeyField,   
14:              party => party.NonKeyField,   
15:              (a, author) => new { a.Blog, Author = author})  
16:          .Where(p => p.Author.Disposition == "author")  
17:          .OrderBy(a => a.Blog.BlogId)  
18:          .Skip(skip)  
19:          .Take(top)  
20:          .ToListAsync())  
21:          .Select(a => CompositeBlog.Accept(a.Blog, author: a.Author));  
22:    }  

Some comments:

  • Line 1: Declared privately as more general methods will use the implementation
  • Line 3: The EF context type is part of T4 options configuration
  • Line 4: Any 'root' resource expression to be applied
  • Lines 5-6: Any paging options supplied externally
  • Lines 7-21: The generated query, returning an enumerable of CompositeBlog, a class generated by DSL processing that can hold the results of expansions and the root object

Generated 'top level' methods
As the generated 'expanded' methods are declared privately, I expose 'top level' methods. This makes the use of the generated class easier, since you pass in the expansions to use, and reflection is used to locate the appropriate implementation to invoke.

Two variants are generated per resource class - one for a collection of resources, one for a specific resource. The 'collection' style entry point is:

1:  public async Task<IEnumerable<CompositeBlog>>  
2:            GetBlogsWithExpansion(  
3:               BloggingContext ctx,   
4:               Expression<Func<Blog, bool>> expr = null,   
5:               int top = 10,   
6:               int skip = 0,   
7:               IEnumerable<string> expansions = null) {   
8:    return await GetMultipleObjectsWithExpansion<CompositeBlog, Blog>  
9:                 (ctx, expr, expansions, top, skip);  
10:  }  
11:    
12:    

Comments:

  • Lines 3-7: The EF context to use, along with a base expression (expr) and paging requirements and any expansions to be applied
  • Lines 8-9: Call a method defined in the generated BaseQueryHandling class to find the correct implementation and execute it (a sketch of the idea follows)
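To show roughly what that dispatch can look like - a sketch only, assuming a generic base class that knows its TContext and TagName, with the usual reflection/LINQ/expression usings; the real BaseQueryHandling in the repo is more involved:

 protected async Task<IEnumerable<TComposite>> GetMultipleObjectsWithExpansion<TComposite, TEntity>(
     TContext ctx,
     Expression<Func<TEntity, bool>> expr,
     IEnumerable<string> expansions,
     int top,
     int skip) {
   // Compose a name such as Get_Blogs, Get_Blogs_Author or Get_Blogs_Author_Readers
   var suffix = string.Join("_", expansions ?? Enumerable.Empty<string>());
   var name = "Get_" + TagName + (suffix.Length > 0 ? "_" + suffix : string.Empty);
   // Locate the generated private implementation and invoke it
   var method = GetType().GetMethod(name, BindingFlags.Instance | BindingFlags.NonPublic);
   var task = (Task<IEnumerable<TComposite>>)method.Invoke(this, new object[] { ctx, expr, top, skip });
   return await task;
 }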

Example use
Imagine this closely connected to a REST API surface (there is a T4 template that can do this, which integrates with Swashbuckle as well). The paging, expansions and filter (expression) requirements will be passed in with a request from an API consumer and, after being sanitised, will in turn be given to a generated query handler class. So the example given is what one might call contrived.

A concrete test example appears below:

1:  using (BloggingContext ctx = new BloggingContext()) {  
2:   var handler = new BlogsQueryHandler();  
3:   var result = await handler.GetBlogsWithExpansion(  
4:             ctx,   
5:             b => b.BlogId > 100,   
6:             10,   
7:             10,   
8:             BlogsQueryHandler.ExpandAuthor,   
9:             BlogsQueryHandler.ExpandReaders);  
10:   // .. Do something with the result  
11:  }  

Comments:

  • Line 1: Create a context to use
  • Line 2: Create an instance of the generated class
  • Line 3: Call the collection entry point of the generated class
  • Lines 4-7: Supply the EF context, an expression and top and skip specifications 
  • Lines 8-9: Add in some expansions

Customisation
The T4 template has a 'header' section that allows for various options to be changed. I won't go into detail, but it is possible to change the base namespace for generated classes, the EF context type needs to be correct, whether a JSON or text format DSL file is being used, whether the 'advanced' DSL form is used - and so on. The GitHub page supplies more detail.


 // ****** Options for generation ******   
 // Namespaces to include (for EF model and so on)  
 var includeNamespaces = new List<string> { "EF.Model" };  
 // The type of the EF context  
 var contextType = "BloggingContext";   
 // Base namespace for all generated objects  
 var baseNamespace = "Complex.Omnibus.Autogenerated.ExpansionHandling";  
 // The DSL instance file extension of interest (txt or json)  
 var srcFormat = "json";  
 // True if the advanced form of a DSL instance template should be used  
 var useAdvancedFormDSL = true;  
 // Form the dsl instance file name to use  
 var dslFile = "dsl-instance" + (useAdvancedFormDSL ? "-advanced" : string.Empty) + ".";  
 // Default top if none supplied  
 var defaultTop = 10;  
 // Default skip if none supplied  
 var defaultSkip = 0;  
 // True if the expansions passed in should be checked  
 var checkExpansions = true;  
 // If true, then expansions should be title cased e.g. posts should be Posts, readers should be Readers and so on  
 var expansionsAreTitleCased = true;  
 // ****** Options for generation ******   

Friday, October 21, 2016

Angular 2: Creating decorators for property interception

As part of 'polishing' the esoteric languages testbed Angular 2 SPA, I thought it might be useful to allow entered source code to be auto-magically persisted. This led me on a small journey into the ng2 decorator mechanisms, which are surprisingly easy to implement and reminiscent of c# attributes, but without the static limitations.

.Net Core MVC hosted solution on GitHub. Node package source also on GitHub.

The essence of the idea was to be able to decorate a property of a type and have any setting of its value to be automatically persisted - along with a suitable getter implementation.

Sort of as shown below, meaning both the language and sourceCode properties should be persistent. The @LocalStorage decoration implies strongly that this persistence should be in HTML 5 local storage.

1:  export class ExecutionComponent {  
2:    @LocalStorage('ELTB') language: string;  
3:    @LocalStorage('ELTB') sourceCode: string;
4:    programOutput = '';  
5:    programInput = '';  
6:    running = false;   
7:    inputRequired = false;  
8:    
9:    constructor(private _esolangService: EsolangService) {  
10:      console.log('built EC');  
11:    }  
12:  }  

So, how do you achieve this? There are plenty of detailed articles around for how to implement a decorator (at the class, property etc level), so I'm not going to describe it in detail.

It's easier just to present the code below, which has these main points of interest (note that this is aggregated code for presentation purposes from the node package source for this project):

  • Lines 2-7: Define an interface that represents the 'shape' of an object that can act as an interceptor for property gets and sets
  • Lines 9-14: Another interface, that defines the contract for an options type; one that can be passed as part of the decorator if it is required to achieve more finely grained behaviour, supply a factory for creating DelegatedPropertyAction instances and so on
  • Lines 16-35: the local storage decorator function entry point, that can be called with a union of types; either a string or an object that implements the AccessorOptions interface
  • Lines 37-39: a decorator function entry point for allowing general property interception, e.g. @PropertyInterceptor({ storagePrefix: '_', createToJsonOverride: false }). An example is shown later on.
  • Lines 41-82: A function that returns a function that implements the general property interception behaviour, with its behaviour directed somewhat by an instance of AccessorOptions
  • Lines 85-113: An implementation of a DelegatedPropertyAction that gets and sets based on local storage


1:    
2:  export interface DelegatedPropertyAction {  
3:    propertyKey: string;  
4:    storageKey: string;  
5:    get(): any;  
6:    set(newValue: any): any;  
7:  }  
8:    
9:  export interface AccessorOptions {  
10:    storagePrefix?: string;  
11:    factory?(propertyKey: string, storageKey: string): DelegatedPropertyAction;  
12:    preconditionsAssessor?(): boolean;  
13:    createToJsonOverride?: boolean;  
14:  }  
15:    
16:  export function LocalStorage(optionsOrPrefix: string | AccessorOptions) {  
17:    function ensureConfigured(opts: AccessorOptions): AccessorOptions {  
18:      opts.preconditionsAssessor =  
19:        opts.preconditionsAssessor ||  
20:        (() => window.localStorage && true);  
21:      opts.factory =  
22:        opts.factory ||  
23:        ((p, c) => new LocalStorageDelegatedPropertyAction(p, c));  
24:      return opts;  
25:    }  
26:    return AccessHandler(  
27:      ensureConfigured(  
28:        typeof optionsOrPrefix === "string" ?  
29:        <AccessorOptions>{  
30:          storagePrefix: optionsOrPrefix,  
31:          createToJsonOverride: true  
32:          }  
33:          : optionsOrPrefix  
34:      ));  
35:  }  
36:    
37:  export function PropertyInterceptor(options: AccessorOptions) {  
38:    return AccessHandler(options);  
39:  }  
40:    
41:  function AccessHandler(options: AccessorOptions) {  
42:    return (target: Object, key?: string): void => {  
43:    
44:      function makeKey(key: string) {  
45:        return (options.storagePrefix || '') + '/' + key;  
46:      }  
47:    
48:      if (!options.preconditionsAssessor || options.preconditionsAssessor()) {  
49:    
50:        let privateName = '$__' + key, storeKey = makeKey(key);  
51:    
52:        target[privateName] = options.factory(key, storeKey);  
53:    
54:        Object.defineProperty(target, key, {  
55:          get: function () {  
56:            return (<DelegatedPropertyAction>this[privateName]).get();  
57:          },  
58:          set: function (newVal: any) {  
59:            (<DelegatedPropertyAction>this[privateName]).set(newVal);  
60:          },  
61:          enumerable: true,  
62:          configurable: true  
63:        });  
64:    
65:        const notedKey = '_notedKeys', jsonOverride = 'toJSON';  
66:    
67:        target[notedKey] = target[notedKey] || [];  
68:        target[notedKey].push(key);  
69:    
70:        options.factory(notedKey, makeKey(notedKey)).set(target[notedKey]);  
71:    
72:        if (options.createToJsonOverride && !target.hasOwnProperty(jsonOverride)) {  
73:          target[jsonOverride] = function () {  
74:            let knownKeys = Array<string>(target[notedKey]);  
75:            let result = { _notedKeys: knownKeys };  
76:            knownKeys.forEach(x => result[x] = target[x]);  
77:            return result;  
78:          };  
79:        }  
80:      }  
81:    }  
82:  }  
83:    
84:    
85:  export class LocalStorageDelegatedPropertyAction implements DelegatedPropertyAction {  
86:    
87:    storageKey: string;  
88:    propertyKey: string;  
89:    private val: any;  
90:    
91:    constructor(propertyKey: string, canonicalKey: string) {  
92:      this.propertyKey = propertyKey;  
93:      this.storageKey = canonicalKey;  
94:      this.val = JSON.parse(this.read());  
95:    }  
96:    
97:    get(): any {  
98:      return this.val;  
99:    }  
100:    
101:    set(newValue: any) {  
102:      this.write(JSON.stringify(newValue));  
103:      this.val = newValue;  
104:    }  
105:    
106:    private read() {  
107:      return localStorage.getItem(this.storageKey) || null;  
108:    }  
109:    
110:    private write(val: any) {  
111:      localStorage.setItem(this.storageKey, val);  
112:    }  
113:  }  

So, a contrived re-writing of the very first example, which adds no real value, could be:

1:  @LocalStorage('ELTB') language: string;  
2:  @LocalStorage({   
3:     storagePrefix: 'ELTB',   
4:     factory: (p, c) =>   
5:       new LocalStorageDelegatedPropertyAction(p, c) })   
6:    sourceCode: string;  

The solution on GitHub is a trivial test one; an example from its use is below, showing local storage contents mirroring the page content: