Long Time No Updates: Here’s Some Piano Sheet Music!

So it’s been quite some time since I’ve written any posts (and even longer since any of them were music related!). To make up for it, I’ve uploaded 3 new pieces of piano sheet music to my Music page!

(Special thanks to Iker Estalayo’s YouTube channel for providing the basis of these beautiful piano renditions of music from DBZ Super’s OST!)

I’ve been wanting to get back into transcribing pieces for piano, so if you find these interesting or would like to make a request, please leave a comment! I find that transcribing pieces from anime and games is pretty relaxing and helps keep my music chops fresh. I’m definitely hoping to do more of these in the days to come.

In addition, I should note that this year I plan to shift the focus of the blog towards game development. Up until now I have spent most of my time in the professional web development space, but my true passion is games. I hope to share all sorts of knowledge I’ve amassed over the past decade – hopefully something I’ve learned will be useful to you readers out there!

PostScriptum.js Deprecated (And Some Tips on JS Promises)

So today I’m officially deprecating my Promise-like JavaScript library, PostScriptum.js.

While it had a good run and indeed solved a problem I was having with a huge project, I’ve been using Promises long enough now to realize there are plenty of ways to handle my use case with Promises if implemented correctly. It really just comes down to breaking your promises into operations that are as small as possible, and not being afraid to nest Promises just a little when control flow necessitates it.

There was one other major thing that turned me off about Promises initially, though I’ve had a change of heart recently: the proper way to cancel Promise chains, since it’s very easy to fall into an anti-pattern if you’re not careful in how you write rejected handlers. What I found really helpful is the diagram in section 4.2 of this article:

(Read the rest of the article too if you have time, it’s extremely helpful)

The diagram in that section describes the control flow after a promise is rejected, and it looks very much like a literal game of catch where errors are thrown from the left side of the diagram and caught on the right. What I didn’t quite understand before is how Promises try really hard not to stay in a failed state, and they sort of “toss” back to the left side of the diagram unless you explicitly throw another exception. I know that might sound weird, but another way to put it is as follows:

Promise chains that land in a rejected handler will always “bounce” back to the next resolved handler (provided that no additional errors are thrown within the rejected handler, in which case it would move to the next rejected handler)

This is crucial when managing errors, since if you’re not careful you’ll bounce from a rejected handler right back into your resolved handlers, where you really didn’t want to be. The easiest way to avoid this problem altogether is to only ever have one rejected handler in any Promise chain. Remember that catch() is just syntactic sugar for then(undefined, rejectedHandler). Once you understand that much and see how Promises flow down the resolved side vs. the rejected side, things start to make a whole lot more sense.
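To make the “bounce” concrete, here’s a minimal sketch using plain ES6 promises (the handler contents are just for illustration):

```javascript
// A rejected chain "bounces" back to the resolved side as soon as a
// rejected handler returns normally instead of re-throwing:
var p = Promise.reject(new Error('boom'))
  .catch(function (err) {
    // Rejected handler: no throw here, so the chain is considered recovered...
    return 'recovered: ' + err.message;
  })
  .then(function (value) {
    // ...and this resolved handler runs next, NOT another rejected handler.
    return value;
  });

p.then(function (value) {
  console.log(value); // "recovered: boom"
});
```

If the catch() handler had re-thrown instead of returning, the chain would have stayed on the rejected side and skipped ahead to the next rejected handler.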

Maybe at some point I’ll do a more in-depth post about how promises work and how to avoid certain anti-patterns that I ran into. But for now, don’t use PostScriptum.js and grab another library such as Bluebird, RSVP, or even just use straight up ES6 promises.

AS3JS 0.3.0 Released

So I thought I’d do a brief post about version 0.3.0 of AS3JS, which I released a few days ago. It comes with a bunch of various bug fixes, a few new configuration options, and an all-new live browser demo of everything in action!


The biggest change in this update, however, is that I’ve separated the output “program” from the “loader” for the library. In other words, output from the AS3JS library is still “vanilla” JavaScript for the most part, but I’ve stripped out the part that actually initializes your application and placed it in a separate script. The main reason behind this is that I had always wanted to build a browser demonstration of AS3JS, but it seemed like all the extra boilerplate could be a turn-off for some people. By writing out the AS3 program as a basic JS object containing a hash of class names to modules, I was able to drastically reduce the amount of text in the output file and generalize the loading process into a separate function. So it’s just a matter of calling AS3.load(…) and passing in your “Program” object, and it will start up your application at the entry point that you specify.
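As a rough sketch of what this separation looks like (the object layout and names below are my own illustration rather than exact AS3JS output, and the loader is stubbed out):

```javascript
// Stand-in for the real AS3JS loader script, just to show the flow:
var AS3 = {
  load: function (program) {
    // The real loader resolves the class hash and boots the entry point;
    // this stub simply invokes the entry module directly.
    return program.classes[program.entry]();
  }
};

// The compiled "Program" object: a hash of class names to modules.
var MyProgram = {
  entry: 'com.example.Main', // illustrative entry point name
  classes: {
    'com.example.Main': function () { return 'app started'; }
  }
};

console.log(AS3.load(MyProgram)); // "app started"
```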

Definitely be sure to give the demo a try! With this update I hope to more clearly demonstrate both the similarities between AS3 and JS, as well as distinguish the load process for running a package-based app versus a traditional module-based one.

PostScriptum.js – A Promise-Free Promise Library

UPDATE: This library is now deprecated!! Please read here for more details:



So I released another JavaScript library recently called PostScriptum.js (or PS for short) that provides a mechanism for managing asynchronous callbacks. It resembles a promise library in terms of syntax, but is far from the Promises/A+ specification. It basically acts as a utility that lets you write asynchronous chains that can, contrary to traditional promises, share data between them like this:

PS.create(authenticateUser('bob', 'pass123'), function (user) {
  this.proceedWith([user], [user.user_id]);
}).pthen(getUserPosts, function (user, posts) {
  console.log(user.name + ' has ' + posts.length + ' posts.');
});

The code above demonstrates an asynchronous API call for logging in a user, and using the obtained user ID to fetch their posts.

But you still might be wondering, what makes this any better than a regular promise library?

The key difference here is the control flow of your asynchronous calls. In a typical “promise” flow, the moment you execute a promise function it is queued to start at the beginning of the next JavaScript event loop tick. While this works just fine in simple use cases, it ends up restricting you to a promise-specific workflow. All of your functions that rely on asynchronous callbacks need to be wrapped by a promise library in order to make them chainable, and you are offered no additional constructs to share data between promises (resulting in a reliance on hoisted variables for sharing data). To give a quick example, here is the code I posted above using traditional promise syntax:

var shared = {};
authenticateUser('bob', 'pass123')
  .then(function (user) {
    shared.user = user;
    return getUserPosts(user.user_id);
  }).then(function (posts) {
    console.log(shared.user.name + ' has ' + posts.length + ' posts.');
  });

Looks similar, no? But take a closer look at the usage of the shared variable. Without it, we wouldn’t have a way to access the user within the subsequent then() call. This is because regular promises can only pass one argument to the then() callback, and that argument is always the resolved value of the previous promise. As a result, any promise you create can only return one piece of information, so you have to pass around redundant information with each promise, whether through the promise return value itself or a shared object like the example above. Of course you could alternatively nest the second then() call within the first, but in my opinion doing that would defeat the purpose of promises as a tool for flattening asynchronous callbacks.
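For completeness, here’s what that nested variant would look like (with stubbed-out async APIs so the snippet is self-contained):

```javascript
// Stub async APIs purely for illustration:
function authenticateUser(name, pass) {
  return Promise.resolve({ user_id: 1, name: name });
}
function getUserPosts(userId) {
  return Promise.resolve([{ title: 'Hello' }, { title: 'World' }]);
}

var result = authenticateUser('bob', 'pass123')
  .then(function (user) {
    // Nesting keeps `user` in scope for the inner handler,
    // trading the shared variable for an extra indentation level:
    return getUserPosts(user.user_id).then(function (posts) {
      return user.name + ' has ' + posts.length + ' posts.';
    });
  });

result.then(function (message) {
  console.log(message); // "bob has 2 posts."
});
```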

Another difference is that normal promise libraries don’t require you to execute a run() function. While it is definitely useful to have promises queued up to execute right away, I chose not to do this in PS due to the compositional nature of the library. PS allows you to compose your asynchronous execution flow before actually executing anything. I personally found that adding explicit developer intent behind the execution made this type of code easier to follow by separating definition from execution, and I figured that 6 extra characters couldn’t hurt. It also only needs to be called once on the root PS instance and you’re good to go from there.

In addition, PS by default does not queue your asynchronous call to the next tick of the event loop. Instead, it assumes the function you provided is already asynchronous and executes it immediately. The reason for this is that I figured if you’re already presumably using the library with asynchronous functions, why forcibly stack an extra setTimeout() on top of everything? If you want the function you called to be executed on the next event loop tick you can use a flag for that. This library doesn’t attempt to forcibly change how the libraries you are using currently behave.
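You can see that deferral in a couple of lines of plain ES6; even an already-resolved promise waits until the current synchronous code finishes before its handler runs:

```javascript
var order = [];
order.push('sync 1');

// The handler below is queued, not run inline, even though the
// promise is already resolved:
Promise.resolve().then(function () {
  order.push('async');
  console.log(order.join(', ')); // "sync 1, sync 2, async"
});

order.push('sync 2');
```

This is the extra scheduling hop that PS skips by default, on the assumption that the function you handed it is already asynchronous.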

The final major difference is that PS by default does not consume errors. The way promises are required to consume errors is my biggest pet peeve about them: they tend to obliterate useful details about how and where an error occurred. There is nothing worse than trying to debug a library that consumed all of your error output, so I decided it would make more sense to trust the developer to write proper error handling for their asynchronous calls in the first place. Sure, it helps if you have a thorough understanding of how promises work, but it doesn’t make sense to me that we would even allow this to be possible in the first place without explicit developer intent. If you need to capture errors with PS you can do it via the catch() function, or by using the pthen() function (see the documentation for details).
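Here’s a small ES6 demonstration of that consumption: the thrown error never escapes to a normal try/catch, and only resurfaces if you attach a rejected handler:

```javascript
var outcome;
try {
  var p = Promise.resolve().then(function () {
    throw new Error('boom'); // consumed by the promise machinery
  });
  outcome = 'no synchronous error escaped';
} catch (e) {
  outcome = 'never happens';
}
console.log(outcome); // "no synchronous error escaped"

// The details come back only when you explicitly opt in:
p.catch(function (err) {
  console.log('recovered: ' + err.message); // "recovered: boom"
});
```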

Anyway, I know that this article might seem very “anti-promises”, but I should make it clear that I do think there’s definitely a place in JavaScript for promises. Traditional promises can definitely make callback chains cleaner in many cases, but they do not address the main cause of callback hell for me personally which was the need to exchange information between callbacks. I created PostScriptum.js for devs who want an alternative to promises without too much abstraction, and simple mechanisms to pass data along callback chains. Promises definitely still have a place in the world, but I don’t think it can hurt to have an alternative out there.

ImportJS 3.0 Overview

Since I’ve updated ImportJS to version 3.0 recently (and even released a library based on it) I wanted to write a little bit about what ImportJS is and why I love using it.

Let me begin by mentioning that ImportJS is something I created a few years ago because I wanted a way to write modular code that felt familiar to me, and I wasn’t impressed by the alternatives at the time. I’m not going to spend much of this article bashing popular module loaders, but I will at least say that many of them seemed to over-complicate what I thought a module system needed to do. I’m sure there are many reasons why certain decisions were made about these systems, but I’m going to present my personal take on writing modules for the web.

ImportJS Is About Modules

In case you are unfamiliar with what a module is, think of it as a standalone section of code (usually one module per file) that contains a small part of your application. Modules may also be called “packages” in other languages, but for simplicity’s sake let’s stick with JavaScript terminology. Modules can reference other modules by importing them, as well as expose their own properties and functions so that other modules can use them. Well-designed modules have very few dependencies; they don’t rely on many other modules to work properly. Below is an example of a very simple module pattern in Node.js/CommonJS syntax:

/* my-module.js */

//Import other modules as you please
var SomeOtherModule1 = require('./some-other-module1');
var SomeOtherModule2 = require('./some-other-module2');

//Set up what you want to expose
var MyModule = function () {};

MyModule.prototype.foo = 0;
MyModule.prototype.increment = function () { this.foo++; };

//Export what you want to expose to other modules
module.exports = MyModule;

/* my-app.js */

//Use a module like so
var MyModule = require('./my-module');

var instance = new MyModule();
console.log("foo is: " + instance.foo); //Prints "foo is: 0"
instance.increment();
console.log("foo is: " + instance.foo); //Prints "foo is: 1"

This is a very commonly used coding pattern among server-side and desktop JavaScript applications, but it doesn’t work all that well in the web browser. On a command-line Node.js application, scripts don’t have to be downloaded so require() can run synchronously. On the web, you would have to bundle your code in advance in order to simulate this feature using a build tool like webpack or Browserify (or perhaps use an asynchronous loader like RequireJS). The upcoming version of JavaScript known as ES6 has a decent solution, but until the specification is implemented in all browsers we’ll be hopping around different build tools for quite some time.

ImportJS Is Like CommonJS

ImportJS resembles CommonJS syntax, but for the browser:

/* my-module.js */

ImportJS.pack('MyModule', function (module, exports) {
  // this.import() for first-party dependencies
  var SomeOtherModule1 = this.import('SomeOtherModule1');
  var SomeOtherModule2 = this.import('SomeOtherModule2');
  // this.plugin() for third-party dependencies
  var $ = this.plugin('jquery');

  var MyModule = function () {};

  MyModule.prototype.foo = 0;
  MyModule.prototype.increment = function () { this.foo++; };

  module.exports = MyModule;
});

/* my-app.js */

//Load files before starting app
ImportJS.preload({
  baseUrl: 'js/',
  packages: ['my-module.js'],
  ready: function () {
    var MyModule = ImportJS.unpack('MyModule');

    var instance = new MyModule();

    console.log("foo is: " + instance.foo); //Prints "foo is: 0"
    instance.increment();
    console.log("foo is: " + instance.foo); //Prints "foo is: 1"
  }
});

As you can see above, it’s not truly the same as CommonJS, because its versions of require(), the import() and plugin() functions, have a different usage.

In ImportJS, modules are stored via ImportJS.pack(). They can be retrieved later with ImportJS.unpack() while outside of a module, or this.import() while inside one. You can use any string you’d like for a module name, but I highly encourage Reverse Domain Name Notation if you plan to use the asynchronous loading capabilities of the library. ImportJS has a static preload() function that can load dependencies asynchronously, parsing the function body of each dependency as it loads to determine what other files to fetch. File names map from Reverse Domain Name Notation to actual paths (so “Module” maps to “./Module.js”, “com.AnotherModule” maps to “./com/AnotherModule.js”, etc.). All of this is relative to the root path you set, regardless of depth, which means you don’t have to type out relative dependency paths in a project 😉
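The name-to-path rule can be sketched in a couple of lines (a hypothetical helper mirroring the mapping described above, not ImportJS’s actual internals):

```javascript
// Reverse-DNS module name -> file path relative to the configured root:
function moduleNameToPath(name) {
  return './' + name.split('.').join('/') + '.js';
}

console.log(moduleNameToPath('Module'));            // "./Module.js"
console.log(moduleNameToPath('com.AnotherModule')); // "./com/AnotherModule.js"
```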

Code imported via this.plugin() actually does the same thing as import(), but it is reserved for asynchronous dependency loading. While fetching files dynamically, if ImportJS encounters a this.plugin('jquery') call it will search a “plugins” folder at the root of your project for “plugins/jquery/jquery.js”. This creates a separate load context for ImportJS, so that the plugin resolves its own dependencies before any load completion callbacks trigger. The plugin’s dependencies get an entirely separate namespace, so duplicate module names will not conflict with your main project files. The purpose of this is to separate the concerns of your third-party dependencies from your own, and I have found this pattern useful across many projects I’ve worked on.

Finally, we have the ImportJS.compile() function, which is a fancy way of “unpacking” all of your modules at once. ImportJS doesn’t actually run the code in your modules until you extract it with unpack(). In the above example you could technically remove the compile() call and ImportJS would still automatically execute your module when unpack() is called. This auto-unpacking includes any modules imported via import(), though I recommend using compile() to signal developer intent that the application is ready to be unpacked and executed.

ImportJS Is Barely A Module Loader

Say what??? After everything I just said this might sound a bit odd, considering what most module loaders are made for, but ImportJS at its core really isn’t a system for loading modules. It would be more accurate to describe it as a system for storing modules. While it does have features to fetch dependencies asynchronously, it works just fine when used purely as a tool for namespacing your code. You can comfortably write your source code in separate files and bundle them together for deployment without worrying about a dependency graph, since you control when to actually start unpacking all of your modules.

At this point you might be wondering, “isn’t this just a glorified global object used for namespacing?”. The short answer to that question is yes, it’s a global object you can use to store and retrieve other objects. However, the key feature I’ve yet to cover is the ability to inject logic between the post-execution and pre-initialization phases of an application. I’ll explain what I mean by this shortly, but first let me give some background into what led up to it.

Concern #1 – Asynchronous Loading

So the first obstacle I ran into when I started writing JS full-time was managing dependencies that may not all be loaded simultaneously. This issue doesn’t exist on most non-web platforms, since all of the code comes bundled in one package. For example, I come from a Flash/ActionScript background, which has a packaging system very similar to Java’s. As such, dependencies are all taken care of at compile time and you don’t have to worry about whether everything has been loaded.

When you bundle all of the code for a JavaScript application into one file these problems tend to go away, although the solution almost always involves a module loader that must parse a dependency graph from your code before it can spit out a bundled file. Depending on the bundler, you can sometimes experience slow compilation times because of this.

On a side note, one thing I find odd about the idea of asynchronous dependencies in JavaScript is that there are module loaders out there designed for loading modules asynchronously, yet we often choose to bundle the code into one package anyway. It seems like the preferred choice has always been bundled code, since it means fewer HTTP requests and less worrying about dependency availability. This was one of the things I definitely wanted to address with ImportJS.

Concern #2 – Module Organization

The second obstacle I’ve found in JavaScript is finding the “best” method to write modular code. In some other languages such as Java or ActionScript, you’re forced to use a class-based structure for your code with one specific entry point for your application. In JavaScript anything goes, so there has never really been a specific pattern that everyone “must” use. I actually consider this a great thing since it’s part of what makes JavaScript so flexible, but in my opinion the moment you perform logic within the root scope of your module you’ve already made a mistake. So while it’s awesome that JavaScript doesn’t boss you around, I feel it leaves the developer open to some poor coding patterns that hurt scalability.

My Module Principles

I’ve narrowed down my main thoughts on solving the above to the following 3 principles I now use when writing JavaScript applications. These principles obviously don’t fit the needs of every application, but I’ve used them frequently enough that I can comfortably say they should cover the overwhelming majority of use cases:

  1. A JavaScript application should have a single entry point.
  2. External (third-party) JavaScript dependencies should always be resolved, loaded, and executed before your application code.
  3. Internal JavaScript dependencies should be resolved and loaded prior to executing any logic at the entry point.

Principle #1 in particular is a common feature of many module loaders which need to know how to initialize your application once its dependencies are resolved.

As for Principle #2, I’ve seen most module loaders handle this in such a way that you must “shim” third-party modules that don’t fit the loader’s criteria. This way you can reference those third-party dependencies as if they were native to your application, and the load order can be determined while resolving the dependency graph. In my experience, a big benefit of this is the ability to include multiple versions of the same third-party library in your application, because you can create aliases for each unique version. (Though if you find yourself needing multiple versions of a library, you might want to think twice about how you’re implementing things.) In any case, I came to the conclusion that it’s far cleaner and more straightforward if all third-party dependencies are resolved before your core application code, which is one of the things ImportJS handles naturally.

Then finally Principle #3 is where ImportJS strays away from typical module loaders. Let’s take a look at how.

Your Typical Module Loader

  1. Load a config
  2. Load the entry point source file
  3. Parse AST of loaded source file for required dependencies
  4. For each dependency in this source file that has not been loaded yet, start loading it
  5. Repeat steps 3 and 4 until all dependencies have been met for a particular source file’s code
  6. Execute modules immediately the moment their dependency requirements are met (entry point is naturally last)

Note: The above list is for asynchronous loading. If the code was bundled in one package then similar logic is used at compile time, but you can assume the fetch time for a dependency would become negligible.

ImportJS Module Loading

  1. Load a “config” (optional, may not even be necessary)
  2. Load all of your module files in whatever order (if bundled it’s almost instant; if async, dependencies are loaded as they’re found via regex)
  3. Execute all of your modules now that they have all been loaded
  4. Initialize application entry point

This might seem confusing at first, but the key difference between these two approaches is step #6 of the typical module loader. In order to understand it, I want to draw a clear distinction between loading a module, executing a module, and initializing part of a module.

  • Loading a module means downloading the source code, but not yet evaluating it
  • Executing a module means evaluating (running) the module’s source code
  • Initializing part of a module means running specific logic within a module (whether that be “new Thing()” or calling “doSomething()”)

With your typical module loader, the loading and execution steps are tightly coupled, because the completion of loading a module’s dependencies results in the immediate execution of the code inside that module. What bothers me about this is that I like to think of an application as a single entity. Yes, that entity might be made of smaller parts, but I don’t want those smaller parts to do anything until I know they have all finished loading.

The example I’m going to use to demonstrate why this is an issue involves the dreaded circular dependency, which requires ugly work-arounds in some loaders. Now before you get alarmed, I am not here to promote circular dependencies. However, I think it’s important to acknowledge why this can’t be handled easily in other module syntaxes, and it’s the perfect way to demonstrate how ImportJS naturally solves it if you follow some basic guidelines. Let’s look at a simple circular dependency in AMD syntax, using a “brother” and “sister” object that depend on each other’s existence:

define('main', ['brother', 'sister'], function (brother, sister) {
  console.log('RequireJS: Loaded main');

  return {
    toString: function () {
      return "RequireJS: main, " + brother.name() + ", " + sister.name();
    }
  };
});

define('sister', ['brother'], function (brother) {
  console.log('RequireJS: sister is ready', brother);

  return {
    brother: brother,
    name: function () {
      return "RequireJS: sister [" + brother + "]";
    }
  };
});

define('brother', ['sister'], function (sister) {
  console.log('RequireJS: brother is ready', sister);

  return {
    sister: sister,
    name: function () {
      return "RequireJS: brother [" + sister + "]";
    }
  };
});

require(['main'], function (main) {
  //Entry point for RequireJS
  console.log(main.toString());
});

If you executed the above code with RequireJS, you’re going to get a nice pretty “RequireJS: sister is ready undefined” in your browser console. Why is this? It’s because both “brother” and “sister” require each other as dependencies, but AMD syntax cannot return one of the module references until the other has finished executing. As a result, regardless of the load order of “brother” and “sister” only the one that was loaded first will have access to its sibling.

To elaborate: if “brother” is loaded first, it needs “sister” before it can enter its execution state. This forces “sister” to enter the execution state before “brother” gets a chance to complete, resulting in “sister” holding an undefined reference to her sibling.

Now let’s talk about why this is a non-issue with ImportJS using the same brother-sister example:

ImportJS.pack('main', function (module, exports) {
  var brother = this.import('brother');
  var sister = this.import('sister');

  module.exports = {
    toString: function () {
      return "ImportJS: main, " + brother.name() + ", " + sister.name();
    }
  };

  console.log('ImportJS: loaded main');
});

ImportJS.pack('sister', function (module, exports) {
  var brother;
  this.inject(function () {
    brother = this.import('brother');
    console.log('ImportJS: sister is ready', brother);
  });

  module.exports = {
    name: function () {
      return "ImportJS: sister [" + brother + "]";
    }
  };
});

ImportJS.pack('brother', function (module, exports) {
  var sister;
  this.inject(function () {
    sister = this.import('sister');
    console.log('ImportJS: brother is ready', sister);
  });

  module.exports = {
    name: function () {
      return "ImportJS: brother [" + sister + "]";
    }
  };
});

ImportJS.compile();

You will not get a single undefined anywhere in your console output. The reason for this is what I call deferred dependency injection. In simpler terms, this means delaying dependency resolution until after the execution step, but before the initialization step. This takes place through the “this.inject()” mechanism in ImportJS, which is nothing more than a function callback that gets executed only at the time you call ImportJS.compile() (i.e. pre-unpack()). Without this feature, you’d get the same issue as the AMD example.

The compile() function will go through all of the modules you have stored, and execute them one by one. Any import() calls outside of an inject() function will be resolved immediately. But any code within the inject() function is delayed until every single module has safely completed its execution step. The end result of this is having completely resolved dependencies at the beginning of application run-time without any hacks!
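To illustrate the idea apart from ImportJS itself, here’s a toy registry (entirely hypothetical names, not ImportJS’s real code) showing how running inject() callbacks only after every module body has executed lets a circular reference resolve cleanly:

```javascript
var registry = { modules: {}, pending: [] };

function pack(name, factory) {
  registry.modules[name] = factory; // store, but don't execute yet
}

function compile() {
  var mods = {};
  // Execution step: run every module body first...
  Object.keys(registry.modules).forEach(function (name) {
    var ctx = { inject: function (fn) { registry.pending.push(fn); } };
    mods[name] = registry.modules[name].call(ctx);
  });
  // ...then the injection step, when every export already exists:
  registry.pending.forEach(function (fn) { fn(mods); });
  return mods;
}

pack('sister', function () {
  var brother; // resolved later, in the inject phase
  this.inject(function (mods) { brother = mods.brother; });
  return { sibling: function () { return brother; } };
});

pack('brother', function () {
  return 'I am brother';
});

var mods = compile();
console.log(mods.sister.sibling()); // "I am brother"
```

Even though 'sister' executes before 'brother' exists, her injected callback runs only after both exports are in place, so the reference is never undefined.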

But despite this feature, please don’t go littering your applications with circular dependencies! They are still considered an anti-pattern in the dev community and should be used sparingly.


Now, there is one last thing to keep in mind with all of this, and that’s the fact that none of it will be useful without following some sort of guidelines. So here’s how I prefer/recommend writing modules under ImportJS (and in general, really):

  1. Modules should export one “class”, and one class only (it can be a function or object, though a function is preferable)
  2. Modules should never execute any logic outside of the class’s definition (Let the functions in the class do the setup, not the module)
  3. Module dependencies should always be resolved in the inject() phase (except in the case of a parent-child relationship, where the child needs the parent to be extended)
  4. Write your modules in individual files for development and concatenate them to test rather than loading them through the preload() feature
  5. Have a single entry point for the application, preferably inside of a class constructor if you want to follow a more traditional practice
  6. Separate concerns for 3rd party dependencies by either loading them separately into the global namespace, or creating ImportJS wrappers/plugins.

If you follow the above steps and keep your modules nice and compact, writing JavaScript with ImportJS should start to resemble more traditional application development. I think the patterns used here have been heavily battle-tested over the years on other platforms, which to me warrants giving them a shot in the JS world. I certainly hope that you’ll consider giving ImportJS a try in projects of your own 🙂