Wednesday, January 4, 2017

You might (not) need a JavaScript library

JavaScript today is like:
"Yeah great, and now you need Webpack, Babel, npm, yarn, React ...".

Redux? Relax.

Developing a front-end web project today can be a bit intimidating, especially when your home base is back-end development. What about all those frameworks and tools, and why use them?

I am currently learning about React, and I like it. React has JSX, ES2017 and a nice logo. That is cool, but my favorite thing about React is how the code is organized. A user interface is built from components: small and isolated "packages" of HTML & JavaScript. That is a pattern I like.

Components with vanilla JavaScript?
When learning about React patterns, I started thinking about how this could be done without using React, ES2017, Webpack or any of the other libraries and frameworks out there. Is that even possible?

Okay, but why?
I want to learn and understand the problems that are solved by using a library. I also want to find out which problems can be solved without an npm install. One way of doing that is to write all the code in plain old vanilla JavaScript, HTML and CSS, and find out which pain is removed by which library. Also, I think it would be a fun challenge!

Example code
You will find all the code referenced in this blog post at my GitHub page.

No build step required
So, I spent some late nights coding and learning. The code in the main branch of the repo does not require any build steps or npm package downloads. The "listItem component" is made of two parts: JavaScript in a code file and an HTML template in a separate file. The render function creates a DOM element from the template, populates it with the data passed in, and returns it via a callback.

code from listItem.js (the module is wrapped in an IIFE, more on that later):
      var listItem = (function () {

        function render(props, done) {

          // load the template using a helper
          templates.load('/src/listItem/listItem.html', function (el) {

            // 'el' is the html template element
            el.textContent = props.data;
            el.addEventListener('click', props.onClick);

            // pass the element to be added to the DOM
            if (done) {
              done(el);
            }
          });
        }

        // expose the public parts of the module
        return {
          render: render
        };
      })();
    
The listItem.html template:
      <li class="listItem" title="the listItem component"></li>
    
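The templates helper is not shown here, but the idea is simple: fetch the HTML file and hand its root element to a callback. A minimal sketch of such a helper could look like this (the actual implementation lives in the repo):
      var templates = (function () {

        // fetch an html file and pass its root element to a callback
        function load(url, done) {
          var request = new XMLHttpRequest();

          request.onload = function () {
            var container = document.createElement('div');
            container.innerHTML = request.responseText;

            done(container.firstElementChild);
          };

          request.open('GET', url);
          request.send();
        }

        return {
          load: load
        };
      })();
    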
This is of course a very simplistic example with a single-tag HTML template, but I think it already highlights an issue: where is the data added? To understand that, we have to read and understand the contents of the render function. It would be nice if the data to be rendered was visible in the actual template.

Time to grow a Mustache?
A template render engine can solve that. Here's the same component, using a template engine called Mustache.js. You will find the code in a separate branch of the GitHub repo.
      function render(props, done) {
        // pass data to a modified template loader
        templates.load('/src/listItem/listItem.html', props, function (el) {
          el.addEventListener('click', props.onClick);

          if (done) {
            done(el);
          }
        });
      }
    
compare with the vanilla code

The templates helper now uses Mustache to render the HTML from the template and the data. It is now possible to write HTML templates with placeholders for the data, like this:
      <li class="listItem" title="the listItem component">{{data}}</li>
    
code from the templates.js file:
      container.innerHTML = Mustache.render(template, data);
    
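In context, the modified load function might look something like this (a sketch, not the exact repo code):
      // a modified load function, taking the data as an argument
      function load(url, data, done) {
        var request = new XMLHttpRequest();

        request.onload = function () {
          var template = request.responseText;
          var container = document.createElement('div');

          // let Mustache replace the {{placeholders}} with data
          container.innerHTML = Mustache.render(template, data);

          done(container.firstElementChild);
        };

        request.open('GET', url);
        request.send();
      }
    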
More issues?
If you look at the source code in the main branch, you'll notice that the JavaScript files are written in a coding style called IIFE (immediately invoked function expression). It isolates code and makes it possible to write modules without using any framework. However, every single file is added with a script tag in the HTML body of the main page (index.html). Some modules depend on others and have to be added in the correct order. That's not great.
      <script src="src/templates.js"></script>

      <script src="src/listItem/listItem.js"></script>
      <script src="src/list/list.js"></script>
      <script src="src/terminal/terminal.js"></script>
      <script src="src/nav/nav.js"></script>
      <script src="src/logView/logView.js"></script>

      <script src="src/app.js"></script>
    
Solution: JavaScript AMD modules
In a separate branch, I have converted all of the immediately invoked function expressions (IIFE) to AMD modules. I use Require.js, which takes care of module loading and dependencies. Instead of a very long list of script tags, only an entry point is defined.

from the index.html file
      <script data-main="src/app" src="lib/vendor/requirejs.js"></script>
    
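The data-main attribute points Require.js to the entry module, which pulls in the top level components. The actual entry code is in the repo; a sketch of the idea:
      // src/app.js: the application entry point
      require(['list/list', 'nav/nav'], function (list, nav) {
        // render the top level components here
      });
    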
Here is the listItem component as an AMD module:
      define(['templates'], function (templates) {
        function render(props, done) {
          templates.load('/src/listItem/listItem.html', props, function (el) {
            el.addEventListener('click', props.onClick);

            if (done) {
              done(el);
            }
          });
        }
        return {
          render: render
        };
      });
    
compare it with the previous branch

But wait. Don't we have native modules in JavaScript now?
Oh, I forgot. It is 2017, and ECMAScript 2015 was released almost two years ago. A nice module system was included in it. Finally, there is a common standard in the language! I have rewritten the code in ES2017 style: with arrow functions, the const keyword and, most importantly, the ES import/export feature. Now the listItem component looks like this:
      import load from 'templates';

      export function render(props, done) {
        load('/src/listItem/listItem.html', props, (el) => {
          el.addEventListener('click', props.onClick);

          if (done) {
            done(el);
          }
        });
      }
    
compare the ES2017 code with old school JavaScript
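
Consuming the component is now just another import. A hypothetical caller could look like this:
      import { render } from './listItem/listItem';

      render({ data: 'Hello', onClick: () => {} }, (el) => {
        document.body.appendChild(el);
      });
    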

I think the code has improved a bit. ES2017 is great, but there are trade-offs to be aware of. Many browsers don't have enough support for this version of JavaScript yet. To make it work in all kinds of browsers and devices, we need to introduce a build step: the code needs to be compiled from ES2017 to vanilla JavaScript with Babel.

The package.json file in the project now has quite a few scripts compared to the original framework-and-build-step-free version. In addition to dependencies like Mustache.js and Require.js, there is a compile step and a Babel polyfill dependency:
      "scripts": {
        "deps:lib": "mkdir -p -v lib/vendor",
        "deps:requirejs": "cp node_modules/requirejs/require.js ...
        "deps:mustache": "cp node_modules/mustache/mustache.min.js ...

        "deps:polyfill": "cp node_modules/babel-polyfill/dist/polyfill...
        "deps": "npm run deps:lib && npm run deps:requirejs && npm run ...
        "transpile": "babel src --out-dir lib --source-maps",

        "lint": "eslint src",
        "build": "npm run lint && npm run transpile && npm run deps",
        "start": "npm run build && live-server"
      }
    
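The transpile script also needs a Babel configuration. A minimal .babelrc for this kind of setup might look like this (the exact presets and plugins in the repo may differ):
      {
        "presets": ["es2015", "es2017"],
        "plugins": ["transform-es2015-modules-amd"]
      }
    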
More frameworks, more problems?
When browsing the page, a couple of third-party libraries are now loaded by the client, besides our own modules. This might cause a not-so-great experience for users on a slow connection.

Bundling & minification
While we're at it, why not add another build step that bundles all the JavaScript files into one single file? This reduces the number of requests from the browser. With minification, we also get rid of a couple of kilobytes. The entry point is now one bundled and minified JavaScript file.
      <script data-main="lib/bundle/main"
              src="lib/vendor/requirejs.js"></script>
    
The source code in this branch is compiled from ES2017 to browser-friendly AMD modules. Require.js ships with a tool for bundling & minification (called R.js), and it is used in this branch. compare the branches
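
For reference, an R.js build configuration is a plain JavaScript object, roughly like this (a sketch; the options used in the repo may differ):
      ({
        baseUrl: 'lib',
        name: 'app',
        out: 'lib/bundle/main.js',
        optimize: 'uglify'
      })
    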

Heard about Webpack?
The scripts section of the package.json file is quite massive now, and probably difficult to understand. By using Webpack, most of those build steps are no longer necessary. Webpack does a lot of things; it's like a Swiss army knife (that's both good and bad, I guess).

package.json with Webpack:
      "scripts": {
        "lint": "eslint src",
        "build": "npm run lint && webpack",
        "start": "webpack-dev-server"
      }
    
compare the two branches, with bundling vs with Webpack

Where did it all go? How is that even possible? Okay, I forgot to mention webpack.config.js. Sorry. Some of the build magic lives in that file now.
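
A minimal webpack.config.js for a setup like this could look as follows (a sketch, assuming babel-loader; the actual config is in the repo):
      module.exports = {
        entry: './src/app.js',
        output: {
          path: __dirname + '/lib',
          filename: 'bundle.js'
        },
        module: {
          loaders: [
            { test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader' }
          ]
        }
      };
    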

So, did Webpack make any difference?
One nice thing about Webpack is that there is no longer any need for Require.js. Webpack resolves the ES2017 modules and converts them to plain vanilla JavaScript before the bundling & minification. Also, Webpack has a local dev server (with auto reloading on file changes) that I like.

Add React to the mix
This is how the listItem component looks when converted to React. The template files are gone; everything is written in the JavaScript modules using the JSX syntax. There is no longer a need for a custom template loader or mustaches. Compared to the source code in the previous branch, this one has less code. I like less code.
      import React from 'react';

      function ListItem(props) {
        return (
          <li className='listItem'
              title='the listItem component'
              onClick={props.onClick}>
            {props.data}
          </li>
        );
      }

      export default ListItem;

    
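Using the component from a parent is plain JSX as well. A hypothetical list component:
      import React from 'react';
      import ListItem from './listItem';

      function List(props) {
        const items = props.items.map((item) =>
          <ListItem key={item} data={item} onClick={() => console.log(item)} />
        );

        return <ul>{items}</ul>;
      }

      export default List;
    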
React: Before vs After

Conclusion
By experimenting with one library at a time, I have learned about the value each one adds, and also about some of the trade-offs that come with using a tool or a framework. Sometimes plain vanilla JavaScript is just enough, and sometimes a framework or library will make life easier. You might (not) need a JavaScript library.

Tuesday, May 10, 2016

Unit test Commerce projects with FakeMaker

FakeMaker has been updated with support for unit testing Episerver Commerce code. I am very excited about this and hope it will be useful in your projects. The new version was made with contributions and feedback from users of FakeMaker. Thank you!

Install the new plugin to FakeMaker - called FakeMaker.Commerce - from NuGet, or check out the source code on GitHub. The package currently supports basic unit testing scenarios. What kind of features are missing? Contact me by creating a GitHub issue, sending a pull request or just writing a comment on this blog.

What is FakeMaker?
FakeMaker takes care of the mocking and simplifies creating the fake content you need when testing your code.

Unit testing the Episerver CMS got a whole lot easier already with version 7. However, creating fake content to be used when setting up mocked repositories isn't always that smooth. When mocking code is all you see on your screen, this little library may help. A bigger screen probably would too, but is probably more expensive. FakeMaker makes it easier to write unit tests for MVC controllers and helpers that expect the Episerver repositories to return content. And now also for creating fake Commerce content.

How does the Commerce support work?
Just like when creating fake pages in FakeMaker:

var page = FakePage.Create("MyPageName");

you can create fake products with FakeMaker.Commerce in the same way:

var product = FakeProduct.Create<ProductContent>("My Fake Product");

Add pages and products to the repository and you are ready to go.

fake.AddToRepository(product);

Get FakeMaker
Read more, have a look at the source code and contribute to the project: https://github.com/DavidVujic/EPiServer-FakeMaker

FakeMaker NuGet package: https://www.nuget.org/packages/FakeMaker/
FakeMaker.Commerce NuGet package: https://www.nuget.org/packages/FakeMaker.Commerce/

Wednesday, March 9, 2016

JavaScript async: the future looks promising

In the beginning, there were the Pyramids.

Okay, that is not exactly true. We actually had Ajax before that. With Ajax, we also got callbacks. Some of the mighty callback pyramids were built, and they are still standing.

About one year ago, ECMAScript 2015 (aka ES6) was released and brought native support for Promises to JavaScript. Does that mean it is now possible to build smaller callback pyramids instead of the mighty ones? Yes. We also have a standardized and more reliable way of writing async JavaScript code: code that behaves in a more predictable way when things go wrong.

Here's an example.

// the async consumer code, passing a url and a callback
get('path/to/my/serverside/api', (response) => {
    // handle the response here
});


// the callback based ajax library that we use
function get(url, onSuccess, onFail) {
    // ajax things here
    // pass the result to the provided callback function
    onSuccess(result);
}

What if the ajax library at some point would ... go crazy?

onSuccess(result);
onSuccess(result);
onSuccess(result);


The callback will be executed several times, and that is out of our control. However, this problem can be solved by using promises. Wrap the ajax library in a promise-returning function and consume the wrapper instead:


// the wrapper
function promiseGet(url) {
    return new Promise((resolve, reject) => {
        get(url, resolve, reject);
    });
}


// consume the promisified version
promiseGet('path/to/my/serverside/api')
.then((response) => {
    // handle the response here
});

The promise will be resolved only once, even if the ajax library goes bananas.
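
You can verify that behavior with a few lines of code: resolving more than once is simply ignored.

const onlyOnce = new Promise((resolve) => {
    resolve('first');
    resolve('second'); // ignored
    resolve('third');  // ignored
});

onlyOnce.then((value) => console.log(value)); // logs 'first'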


The bus stop
This reminds me of when I was on the bus with the kids a couple of days ago. The little one wanted to press the shiny "bus-will-stop" button. When we got closer to our stop, the kid pressed the button and got the expected feedback (a "bus-will-stop" signal!). That was fun! So, he pressed the button again. But nothing happened. Naturally, he tried a couple of times more. Nothing happened.

Just like the "bus-will-stop" button, a promise is resolved only once. I think the bus driver (and passengers) appreciate the feature.


The async brain?
Async JavaScript code - with or without Promises - isn't always the easiest thing to understand. Promises do not really solve the mismatch between how our brains are wired and the flow of callbacks & thenable functions. We have learned how to write async code, and sometimes we even understand it. But wouldn't it be cool if we could write something sequential, like this?


var result = get('path/to/my/serverside/api');

console.log(result);


Of course, the code above won't work. 

I will try again, this time using some of the magic of generators. A generator enables pause-and-continue functionality in a function: you can jump back and forth between the generator and the consumer code.


function* myGenerator() {

    var result = yield get('path/to/my/serverside/api');
    console.log(result);

}


That code won't work out of the box either. We need some help from a library. The library (the run function) will help us go back and forth between the generator and the consumer code, and resolve the promises along the way.


import run from 'async-runner';

run(function* myGenerator() {

    var result = yield get('path/to/my/serverside/api');
    console.log(result);

});


This will work. (here's the code used in this post)
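
The core of such a runner is small. Here is a minimal sketch of the idea (error handling left out, and this is not the actual async-runner source):

function run(generator) {
    const iterator = generator();

    function step(value) {
        const result = iterator.next(value);

        if (result.done) {
            return;
        }

        // assume the yielded value is a promise:
        // resolve it, then feed the result back into the generator
        Promise.resolve(result.value).then(step);
    }

    step();
}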

Sequential async?
Okay, cool. It looks nice and sequential, at least when focusing on the rows within the function. But wait: will people understand code with funky star functions and weird libraries? There has to be another way. I think the upcoming ECMAScript async & await feature can help us write easy-to-understand sequential code, without funky star functions or weird libraries.

Let's remove some code from our previous example:

import run from 'async-runner';

run(function* myGenerator() {

    var result = yield get('path/to/my/serverside/api');
    console.log(result);

});


And add some sweet futuristic async & await JavaScript syntax sugar:

async function myAsyncCode() {

    var result = await get('path/to/my/serverside/api');
    console.log(result);

}

With this, I think it actually is possible to use the words "readable" and "async" in the same sentence. Behind the scenes, the async & await feature is a combination of Promises and Generators. I think the future of async programming looks very promising. Maybe we don't have to wait for it, either: the transpiler Babel already supports async & await today. Perhaps it is time to go back to the future?
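
A bonus with async & await is that error handling becomes plain old try/catch (a sketch, assuming a promise-returning get function like the one above):

async function myAsyncCode() {
    try {
        var result = await get('path/to/my/serverside/api');
        console.log(result);
    } catch (error) {
        console.error('the request failed:', error);
    }
}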


here's the code used in this post

Saturday, February 6, 2016

FakeMaker is updated with Episerver 9 support

Hi Episerver developers!

I have updated FakeMaker (helping you unit test your Episerver code) to support the version 9 binaries. Breaking changes caused compile errors (oops!) and made the tool incompatible with the latest versions of Episerver. It turns out that the base library context no longer has to be mocked, and that's a good thing! The solution was simple: delete code. I love deleting code, maybe even more than writing it.

FakeMaker may help you delete your code.

When writing unit tests, creating fake content and mocked repositories can be a bit depressing. When mocked code is all you see on your screen, this little library may help. A bigger screen probably would too, but is probably more expensive (and won't cure depression).

The latest version is available for download on NuGet, and the source code is on GitHub. Both are targeting version 9.6.1 of the Episerver Core libraries.

If you need Episerver 8 support, download an earlier version from NuGet, or clone the GitHub repo and check out the epi-8-support branch.

Please let me know if you have feature requests or run into any issues.


Here's an example of what you can do with FakeMaker:
say you are about to write code that uses the Episerver page tree to find pages of a certain type, the children of a root node, or maybe just the start page. FakeMaker makes it easier to write unit tests for code like that, by creating an in-memory page tree and adding it to a fake content repository.

Create the pages you need:
var page = FakePage.Create("MyPageName");

or a page of a specific page type:
var myCustomPage = FakePage.Create<CustomPageData>("MyOtherPageName");

Create an instance of FakeMaker:
var fake = new FakeMaker();

Add it to the mocked repository:
fake.AddToRepository(page);

Want more examples?
You will find more examples at the FakeMaker GitHub repo.

Friday, February 5, 2016

Windows? No, I'm a .NET developer.

A couple of days ago, I talked about and coded some of the new ASP.NET Core 1.0 things at the Swetugg 2016 conference in Stockholm. I used my Ubuntu laptop, the one I write code on every day at work (also used for occasional night coding at home, when the kids are sleeping).

Nowadays, I mostly write Python and JavaScript code; both are platform independent languages. I love it! When I was about to start at my current job, I knew very little about what kind of dev tools the team (or even the Python community) was using.

"Well, some of us use editors like Sublime or Vim, some use an IDE like PyCharm. Most of us have macbooks, but a couple of team members have various Linux distros installed on their computers. It doesn't matter. You'll be fine as long as you can run the 'make' command. What stuff do you use?"

I think my brain exploded.

Remember this movie scene? That was me.

(it's from the 1990 version of Total Recall. The AI/robot/thing gets an unexpected question from a human, doesn't know how to respond and finally breaks.)

I have never, ever, in my life as a software developer, heard that question before! .NET development usually equals Windows and (often a specific version of) Visual Studio, at least in my experience.

On my first day at work, I was very excited to use Ubuntu, code in PyCharm and write strange terminal commands. Today, a couple of months later, the terminal has become my new best friend. Ubuntu is fast, responsive and doesn't get in my way. Well, it is sometimes a bit tricky when native apps are missing, but that's fine. It feels like I'm on a diet.

It would be great if we, the .NET people, also were able to freely choose our favourite tools, editors and even operating system.

Why?
Because it's fun! We shouldn't underestimate the value of fun. If the tools we use are platform independent, we can be platform independent as developers too. That will bring us closer to other dev communities. Chances are we will learn (and share) more than when staying in isolation. Being a platform independent developer is a good thing.

In fact, .NET developers can do all of that now.

I have been following the development of ASP.NET 5 (recently renamed ASP.NET Core 1.0) since the early beta versions, and started experimenting with it on my Ubuntu Linux machine a couple of months ago. This is my current setup: write C# code in Atom, push commits to GitHub, trigger builds and publish to Azure Web Apps when pull requests are merged.

Developing on a Linux dev machine and publishing to a cloud-based Windows machine.

Atom, really? I know, Visual Studio is hard to compete with, especially when used together with ReSharper. But editors like Atom are lightweight. I like that. And they are not only lightweight: you can also install plugins like OmniSharp. OmniSharp adds editor features close to the ReSharper experience.

Lightweight tools need friends, and one of them is the terminal. It might be a bit intimidating at first if you are not already a command line user. It was for me. Now I'd rather write one-liners like this than right-click a menu option or open a dialog window:

git checkout -b my-new-feature-branch

Once learned, it is so simple. Let's stay in the command line window, because there is another essential tool for platform independent .NET development: Yeoman.

With Yeoman, we get the command line version of "File -> New Project". You can also install Yeoman as a plugin to Atom. I usually create a new C# project from the command line and later add files using the Atom plugin.
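
If you want to try it, the flow is roughly this (assuming the aspnet generator; names may have changed since this was written):

npm install -g yo generator-aspnet
yo aspnet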

What about Mono?
I haven't figured out how to exclusively use the new modular CoreCLR on my Linux machine. I can't get it to work without selecting the Mono version in the .NET version manager (the dnvm command). I guess it will be fixed soon, but it doesn't really matter if you want to get started now. Mono is the platform independent version of the system-wide .NET Framework installed on your Windows machine.

Besides that, Mono comes with a great tool: the REPL.

It is great for experimenting with and learning about new, cool C# features. Just type the word csharp in the terminal and start exploring.

So far, the .NET development on Linux experience has been very nice.