Implementing a Mini-React-Redux Framework on a Django Page

Introduction

I have built several production web applications using React and Redux and generally have had an excellent experience with those technologies.  One of React’s greatest assets IMO is its ability to integrate into all kinds of stacks and setups but still play nice with the other kids.  That was something that impressed me back in Spring 2014 when I first used React.  We got React running in the jQuery spaghetti code of a massive, legacy Ruby on Rails application with incredibly little effort and huge productivity benefits to the team.  Redux is also incredible for the amount of good it does you with so little code.

There are lots of blogs and tutorials on how to build a full single-page application (SPA) complete with client-side routing, persistent state, and even server-side rendering to boost that time-to-interactivity metric.  What if I don’t need that?  What if I already have a site built using an “old-school” server-side framework like Ruby on Rails or Django, but I have one specific page that should be highly interactive and needs something more robust than simple jQuery?  React and Redux could still be hugely beneficial, but how do I use them without (a) getting bogged down in boilerplate or (b) over-engineering the solution?

Mini React-Redux Framework to the rescue!

Ready, Set, Go!

Let’s make the skeleton of a super tiny JavaScript framework that can fit our use case for a Django website.

Here are the steps we’ll follow:

  1. Set up Webpack with Django
  2. Install our client dependencies
  3. Implement the Mini React-Redux Framework

Set up Webpack with Django

For this step, we are going to use the django-webpack-loader tool to give us the power to load Webpack bundles onto a templated page.  The setup is very simple if you have a vanilla Django application; just follow the loader tutorial.  If you are using the Django-Mako-Plus add-on, supplement the regular loader tutorial with my own little tutorial.

Install our client dependencies

The following are the NPM dependencies I am relying on:

{
  "dependencies": {
    "babel-core": "~6.3.26",
    "babel-loader": "~6.2.0",
    "babel-preset-es2015": "~6.3.13",
    "babel-preset-react": "~6.16.0",
    "react": "~15.4.2",
    "react-dom": "~15.4.2",
    "redux": "~3.6.0",
    "redux-logger": "~2.7.4",
    "redux-thunk": "~2.2.0",
    "webpack": "~1.13.2",
    "webpack-bundle-tracker": "0.0.93"
  }
}

Include these dependencies in your package.json and run npm install.
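For reference, here is a minimal webpack.config.js of the kind the loader expects, sketched against the Webpack 1.x and Babel 6 versions listed above.  The entry point and output paths are assumptions; adjust them to your own project layout:

var path = require('path')
var BundleTracker = require('webpack-bundle-tracker')

module.exports = {
  context: __dirname,
  // Assumed entry point; change to wherever your app's JS lives
  entry: './assets/js/index',
  output: {
    // Bundles are written here with a hash in the filename
    path: path.resolve('./assets/bundles/'),
    filename: '[name]-[hash].js'
  },
  plugins: [
    // Writes webpack-stats.json, which django-webpack-loader reads
    new BundleTracker({ filename: './webpack-stats.json' })
  ],
  module: {
    loaders: [
      // Compile ES2015 and JSX with Babel
      { test: /\.jsx?$/, exclude: /node_modules/, loader: 'babel', query: { presets: ['es2015', 'react'] } }
    ]
  }
}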

Implement the Mini React-Redux Framework

Here is the source I came up with for the mini framework:

import React from 'react'
import ReactDOM from 'react-dom'
import { createStore, applyMiddleware, compose } from 'redux'
import thunk from 'redux-thunk'
import createLogger from 'redux-logger'
import MyComponent from './components/MyComponent'

/**
 * Redux Reducer.
 * @params:
 *  - state: the previous state of the store
 *  - action: an object describing how the state should change
 * @returns:
 *  - state: a new state after applying the appropriate changes
 */
const rootReducer = (state = { clicks: 0 }, action) => {
  // ... change state based on action
  return state
}

/**
 * Redux Store object with three functions you should care about:
 *  - getState(): returns the current state of the store
 *  - dispatch(action): calls the reducer with a given action
 *  - subscribe(listener): registers a listener that is called after every dispatch
 *
 * The store has two optional middlewares to showcase how you would add them:
 *  - redux-thunk: allows `store.dispatch()` to receive a thunk (function) or an object
 *                 See http://stackoverflow.com/questions/35411423/how-to-dispatch-a-redux-action-with-a-timeout/35415559#35415559
 *  - redux-logger: logs out redux store changes to the console. Only in dev.
 */
const middlewares = process.env.NODE_ENV === 'production'
    ? applyMiddleware(thunk)
    : applyMiddleware(thunk, createLogger())
let store = compose(middlewares)(createStore)(rootReducer)

/**
 * Helper function to render a component to the DOM.
 * Makes the following props available to the component:
 *  - storeState: an object of the latest state of the redux store.
 *  - dispatch: a function that dispatches actions to the store/reducer.
 */
const render = (nodeId, Component) => {
  let node = document.getElementById(nodeId)
  // Note: the parameter must be capitalized; lowercase JSX tags compile
  // to plain DOM elements rather than React components.
  ReactDOM.render(<Component storeState={store.getState()} dispatch={store.dispatch} />, node)
}

/**
 * Function that bootstraps the app.
 *  - render the component with initial store state.
 *  - re-render the component when the store changes.
 */
const start = () => {
  render('app', MyComponent)
  store.subscribe(() => render('app', MyComponent))
}

To start the application just call the start function when the page loads. Here’s an example using jQuery:

$(() => start())
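If jQuery is not already on the page, a plain DOM listener works just as well:

document.addEventListener('DOMContentLoaded', start)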

Explanation

This little proof-of-concept is interesting to me because of how much usefulness it provides with so little code.  With this code, we create a Redux store with some basic middlewares and a reducer that does nothing interesting (yet).  Then we render a component to the DOM, giving it the current store state and a function for dispatching actions if necessary, and we set up a store subscription so that the component is re-rendered whenever the store changes.

Another cool part about this approach is that a lot of the setup code can be pulled out and made reusable.  The render(), start(), and store setup would probably be the same for every Mini App we would create.  Then we could simplify this whole file down to just the reducer, passing the node and component into the start function.  A rough sketch of that refactor follows.
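Here is one hedged sketch of that reusable helper; the module name and export are hypothetical, but the body is just the code from above rearranged:

// miniApp.js (hypothetical module name)
import React from 'react'
import ReactDOM from 'react-dom'
import { createStore, applyMiddleware, compose } from 'redux'
import thunk from 'redux-thunk'
import createLogger from 'redux-logger'

export const start = (nodeId, Component, rootReducer) => {
  const middlewares = process.env.NODE_ENV === 'production'
    ? applyMiddleware(thunk)
    : applyMiddleware(thunk, createLogger())
  const store = compose(middlewares)(createStore)(rootReducer)

  const render = () => {
    let node = document.getElementById(nodeId)
    ReactDOM.render(<Component storeState={store.getState()} dispatch={store.dispatch} />, node)
  }

  render()                 // initial render
  store.subscribe(render)  // re-render on every store change
}

Each Mini App would then boil down to a reducer and a call like start('app', MyComponent, rootReducer).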

Conclusion

With very little effort and boilerplate, we have a React application using Redux as its storage system.  With this in place, you can build quite sophisticated widgets and still have the flexibility to get more complex if you need to do something more involved.


Adding Webpack Bundles to your Django-Mako-Plus (DMP) Site

This post describes how to hook up Webpack to a Django site using the django-webpack-loader tool in the special case where your Django site is running the Django-Mako-Plus (DMP) library.

Why Webpack?

In the last few years, the ecosystem of JavaScript build tools has grown in both size and quality.  One of my favorite build tools is Webpack.  If you have not heard of it, I highly recommend it to you for bundling your JavaScript, CSS, and other static assets.  To get the most out of this post, please go do a little cursory research on the use case of the webpack bundler before continuing on here.

I also appreciate the Django framework for building dynamic web applications in Python.  If you would like to use Django with Webpack, it takes a little extra work to get things hooked together in a clean, scalable way.  Webpack outputs “bundles” that can be formatted in many ways (CommonJS, UMD, RequireJS, etc.) depending on how they should be consumed, and it can even output the bundles with an md5 hash in the name to improve the caching of your bundles on the internet.

What is “django-webpack-loader”?

Django, for all its great features, handles static files poorly by modern standards, which is where the django-webpack-loader (hereafter referred to as “the loader”) tool comes in.  It provides a way to load a webpack bundle by name into a Django template by mapping the webpack bundle’s “logical name” (e.g. main) to its filename (e.g. main-be0da5014701b07168fd.js), a filename that changes whenever the contents of the bundle change.  To learn how the loader works, read the documentation and tutorial.

DMP with The Loader

The loader integrates with the templating system of Django.  If you are using Django-Mako-Plus (DMP), you have replaced the default templating engine with Mako, so the loader’s prepared render_bundle helper is not available anymore.  Lucky for us, Mako is so powerful that we can import Python functions with ease.  All we need to do in a template is import the right function and call it using Mako syntax:

<%! from webpack_loader.templatetags.webpack_loader import render_bundle %>
<html>
  <head> 
    ${ render_bundle('main') }
  ...

Simple! We can even simplify this a bit by adding the import statement to DEFAULT_TEMPLATE_IMPORTS for our Mako templates like so:

TEMPLATES = [
  {
    'BACKEND': 'django_mako_plus.MakoTemplates',
    'OPTIONS': {
      # Import these names into every template by default
      # so you don't have to import them explicitly
      'DEFAULT_TEMPLATE_IMPORTS': [
        'from webpack_loader.templatetags.webpack_loader import render_bundle',
      ]
    }
  }
]
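With that in place, every template can call ${ render_bundle('main') } directly, no explicit import line required.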

BAM!

Conclusion

All done!  You are now ready to start using the django-webpack-loader to include Webpack bundles in your Django-Mako-Plus website!



Defending JavaScript

In a forum thread, a member brought up the following critiques of JavaScript.  I quickly recognized many of these arguments as ones I have heard before and really wanted to address.  In my attempt not to hijack the thread (which was not about “To JavaScript, or Not To JavaScript”), I collected my thoughts here.  This is not meant to be a passive-aggressive post, but rather an aboveboard rebuttal in a logical discussion.

The Argument Against JS:

…(skipped for brevity)…

I think Javascript has gone a long ways since [I] first started using it, but here are some issues that I have had with it over the time I have used it:

  1. Javascript has a lot of “cute” and “neat” tricks in it, and I feel like people abuse those tricks constantly. In Python there are some cute tricks that you can use, but the community tends to frown upon it.
  2. NPM is such a good and bad experience. Compared to some other package systems its kind of messy. The other issue is that they have this “microservice” where instead of writing a one line piece of code they instead pull from NPM to get the same thing done. There was an issue a few months ago where one developer removed his package that thousands of people depended on, and it caused a “dependency hell” per se.
  3. Documentation is lousy on almost all javascript projects. The documentation tools for javascript projects are pretty lousy. When you have worked with docutils/sphinx for python you start to wonder what is wrong with the javascript documentation process.
  4. Lack of stability – this is getting better, but still pretty lousy at times. Almost all javascript projects including node has this issue. Everybody is so into “progressing” the platform that they push…
  5. Too many kludges to make it imitate OO
  6. Poor unittesting tools.

My Defense of JavaScript

The arguments above use Python as a language reference point. That’s great! I have used Python for years and love it as well; so I will focus on comparing JS with Python.

1. (Terrible) Language Tricks

Every language has “cute and neat tricks” (read: terribleness). JavaScript has more than some languages, but also less than others. Many of the “terrible things” in JS are a result of how it runs in the browser (DOM, globals, namespacing) rather than problems with the language itself (although JS has some bad ones of its own).

The BIG difference with JS that you are forgetting is that almost every other language can break compatibility relatively freely (e.g. Python 3, Ruby 2, Lua [every version]). JavaScript can’t, because it would break the internet. Bad, wrong-headed decisions cannot be removed once people start using them. Websites need to stop using features before they can be removed. Very few other languages have such strict deprecation requirements.

Python has its share of weirdness that people use, and it had a rather large set of breaking changes with Python 3.  Let’s read from the official sources about Python 3:

There are more changes than in a typical release, and more that are important for all Python users. Nevertheless, after digesting the changes, you’ll find that Python really hasn’t changed all that much – by and large, we’re mostly fixing well-known annoyances and warts, and removing a lot of old cruft.

Python had (still has) a big problem with the Python 2.7 -> Python 3 upgrade.  Even now, years later, many projects still rely on 2.7 and haven’t upgraded. That situation would never work on the world wide web!

2. NPM

Your argument is against how people have used NPM, not against NPM itself. And the issues you cited are definitely problems, but they are problems for every package manager that reaches the nexus of popularity and ease-of-use. RubyGems had the exact same problem with excessive “micro-gems” 10 years ago.

And that NPM “left-pad” issue that broke everything earlier this year … yeah, that could still happen on NPM, PyPI, RubyGems, INSERT_PACKAGE_MANAGER.  Nothing special/bad about NPM made it happen.  Just a guy who decided to be a jerk and removed a package that everyone depended on.

Compare PyPI to RubyGems and NPM and it makes sense why it was such a big deal for NPM: PyPI is notoriously fractured and weird to publish on (hence smaller); RubyGems and NPM are notoriously easy (hence bigger). Package management is a classic hard problem that nobody has completely figured out, from JavaScript, Python, and Ruby to widely used Linux distributions.

3. Docs

JavaScript sucks because a lot of projects don’t write good docs? Not sure I follow the argument. But if you wanted to argue it, you could easily attribute that to the massive number of JavaScript projects vs. Python projects. JavaScript actually has many excellent automated tools for documentation.

4. Instability

Patently false. Microsoft, Google, Facebook, Walmart, Mozilla, etc. have poured so much time, effort, and money into the JS ecosystem (specifically Node.js, NPM, and JS Engine implementations) that it has become one of the most stable platforms you can be on. And don’t forget the JS language guarantee that language features can’t be removed until most websites stop using them. Even among browsers, the consistency of good implementation of JS features is at an all time high.

Any instability in Node.js specifically has been largely mitigated by the new release process (Stable and Current distributions). The only “instability” to speak of is the massive volume of updates that V8 goes through to keep up with ECMAScript features, and those mostly matter to native library maintainers not using node-gyp (which most use, afaik). And even then, Google and Microsoft now work closely with Node.js maintainers to help with API changes in their JS engines.

5. Not OOP

Everything in JS is an object. It just doesn’t use classical OO. Your argument sounds more like classical OO vs. prototypal OO. JS is the latter, and it is just as object-oriented as the classical kind, but fundamentally different because prototypes are objects as well and can be changed at runtime.  To help people wrap their heads around prototypes, ES6 even introduced the class keyword (although I’m not a big fan of it).  In some ways, prototypes are a more true and powerful kind of OOP.  Classes came later, primarily to help with static type checking, not to enable better OOP design.
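To make the distinction concrete, here is a small sketch (names are illustrative) showing a prototype being modified at runtime, and the ES6 class sugar over the same mechanism:

// Prototypal OO: the prototype is itself an object
function Dog (name) { this.name = name }
Dog.prototype.speak = function () { return this.name + ' says woof' }

var rex = new Dog('Rex')
rex.speak()  // 'Rex says woof'

// Change the prototype at runtime; existing instances see the change
Dog.prototype.speak = function () { return this.name + ' says WOOF!' }
rex.speak()  // 'Rex says WOOF!'

// ES6 class syntax is sugar over the same prototype mechanism
class Cat {
  constructor (name) { this.name = name }
  speak () { return this.name + ' says meow' }
}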

In the end, JavaScript is multi-paradigm just like Python with a mix of Object-oriented, Functional, and Imperative paradigms.

6. No Unit Testing

Patently false. You will undoubtedly find many testing libraries of very high quality in JS.  And if you want to talk about the culture of testing in a community: in my experience, in a room with a Rubyist, a Pythonista, and a JavaScripter, the Python guy is the least likely to be writing tests.

Conclusion

JavaScript as a language, with all the warts and weirdness, is easily one of the fastest evolving languages in the world.

10 years ago, who would have thought this about JS:

  1. Most widely used programming language on earth
  2. One of the fastest scripting languages ever
  3. Largest package ecosystem ever (npm)
  4. Popular as a server backend language

I once shared your disdain of JavaScript, but with the recent incredible work being done on the language itself, it has become one of my favorites.

I’ll close with this slide by Brendan Eich, the creator of JavaScript:

[slide by Brendan Eich]


Forms to Emails using AWS Lambda + API Gateway + SES

When deploying static websites, I am not a fan of provisioning servers to distribute them.  There are so many alternatives that are cheaper, simpler, and faster than managing a full backend server: S3 buckets, content-delivery networks (CDNs), etc.  But the catch with getting rid of a server is that now you don’t have a server anymore!  Without a server, where are you going to submit forms to?  Lucky for us, in a post-cloud world, we can solve this!

In this post, I will describe how AWS Lambda and API Gateway can be used as a “serverless” backend to a fully static website that can submit forms that get sent as emails to the site owner.

Important Note

This is merely a demonstration.  For simplicity, I do not explain important things like setting up HTTPS in API Gateway, but I certainly recommend it.  Also, be careful applying this solution to other contexts.  Not all data can/should be treated like publicly submittable contact form data. Most applications will require more robust solutions with authentication and data stores. Be wise; what can I say more.

Prerequisites

  • AWS Account

The project is a simple static marketing website.  Like most business websites, it has a “Contact Us” page with a form that potential customers can fill out with their details and questions.  In this situation, we want this data to be emailed to the business so they can follow-up.  This means we need an endpoint to (1) receive data from this form and (2) send an email with the form contents.

Let’s start with the form:

<form id="contact-form">
  <label for="name-input">Name:</label>
  <input type="text" id="name-input" placeholder="name here..." />

  <label for="email-input">Email:</label>
  <input type="email" id="email-input" placeholder="email here..."/>

  <label for="description-input">How can we help you?</label>
  <textarea id="description-input" rows="3" placeholder="tell us..."></textarea>

  <button type="submit">Submit</button>
</form>

And because API Gateway is annoying to use with application/x-www-form-urlencoded data, we’re just going to use jQuery to grab all the form data and submit it as JSON because it will Just Work™:

var URL = '<api-gateway-stage-url>/contact'

$('#contact-form').submit(function (event) {
  event.preventDefault()

  var data = {
    name: $('#name-input').val(),
    email: $('#email-input').val(),
    description: $('#description-input').val()
  }

  $.ajax({
    type: 'POST',
    url: URL,
    dataType: 'json',
    contentType: 'application/json',
    data: JSON.stringify(data),
    success: function () {
      // clear form and show a success message
    },
    error: function () {
      // show an error message
    }
  })
})

Handling the success and error cases is left as an exercise for the reader 🙂

Lambda Function

Now let’s get to the Lambda function! Open up the AWS Console, navigate to the Lambda page, and choose “Get Started Now” or “Create Function”:

[screenshot: AWS Lambda console]

On the “Select Blueprint” page, search for the “hello-world” blueprint for Node.js (not Python):

[screenshot: “Select Blueprint” page]

Now you create your function.  Choose the “Edit Code Inline” setting, which shows a big editor box with some JavaScript code in it, and replace that code with the following:

var AWS = require('aws-sdk')
var ses = new AWS.SES()

var RECEIVER = '$target-email$'
var SENDER = '$sender-email$'

exports.handler = function (event, context) {
    console.log('Received event:', event)
    sendEmail(event, function (err, data) {
        context.done(err, null)
    })
}

function sendEmail (event, done) {
    var params = {
        Destination: {
            ToAddresses: [
                RECEIVER
            ]
        },
        Message: {
            Body: {
                Text: {
                    Data: 'Name: ' + event.name + '\nEmail: ' + event.email + '\nDesc: ' + event.description,
                    Charset: 'UTF-8'
                }
            },
            Subject: {
                Data: 'Website Referral Form: ' + event.name,
                Charset: 'UTF-8'
            }
        },
        Source: SENDER
    }
    ses.sendEmail(params, done)
}

Replace the placeholders for RECEIVER and SENDER with real email addresses.
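If you want to test the function from the Lambda console before wiring up the API, a sample test event shaped like the form data works (the field values here are placeholders):

{
  "name": "Jane Doe",
  "email": "jane@example.com",
  "description": "I would like a quote."
}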

Give it a name and take the defaults for all the other settings except Role*, which is where we specify an IAM Role with the permissions the function will need to operate (writing logs and sending email). Select “Basic execution role”, which should pop up an IAM role dialog. Take the defaults, but open “View Policy Document” and choose “Edit”. Change the value to the following:

{
    "Version":"2012-10-17",
    "Statement":[
      {
          "Effect":"Allow",
          "Action":[
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
          ],
          "Resource":"arn:aws:logs:*:*:*"
      },
      {
          "Effect":"Allow",
          "Action":[
              "ses:SendEmail"
          ],
          "Resource":[
              "*"
          ]
      }
    ]
}

The first statement allows the function to write logs to CloudWatch. The second statement lets it use the SES SendEmail API. With the IAM Role added, we will move to setting up API Gateway so our Lambda function will be invoked by POSTs to an endpoint.

API Gateway Setup

The process for configuring API Gateway is as follows:

  1. Create an API
  2. Create a “Contact” resource
  3. Create a “POST” method that invokes our Lambda Function
  4. Enable CORS on our resource

Open up the API Gateway in the Console:

[screenshot: API Gateway console]

Select the “Get Started” or “Create API” button.  Give the API a useful name and continue.

Now we will create a “Resource” and some “Methods” for our API.  I will not walk you through each step of the process because the GUI is a little tricky to explain, but the process is fairly straightforward.

Using the “Actions” dropdown, choose “Create Resource” and name it something like “Contact” or “Message”.  Then, with the resource selected, use “Actions” to “Create Method”.  Choose POST.  Now we will connect it to our Lambda function:

[screenshot: POST method integration pointing at the Lambda function]

Once you save this, you will need to enable CORS so that your code in the browser can POST to this other domain.  Choose your resource > Actions > Enable CORS.

[screenshot: Enable CORS settings]

Just to be safe, I added a header to Access-Control-Allow-Headers that I believe jQuery sends on AJAX calls; just put x-requested-with at the end of the comma-separated list. I am also using “*” for the allowed origin so that local testing is easy. For production, you should use the actual domain name your website will run under.

Now your resources and methods should look something like this:

[screenshot: resources and methods tree]

The last step is to “Deploy API”.  It’s not too bad.  Just click through the screens and fill them out with stuff that makes sense to you.  The high-level overview is that you create a “Stage”, and whenever you make updates to your API, you “deploy” to a “stage”.  This means you can deploy the same API to multiple stages, test out any changes on a “Testing” stage, and, if things are good, deploy to the “Production” stage.

At the end of “deploying”, you will be given an “Invoke URL”.  This URL is the root of your API.  To make requests to a resource, just add its name to the end of the URL: “https://invoke-url/stage-name/resource”.  To POST to our “Contact” (or “Message”) resource, given an Invoke URL of https://1111111.magic.amazonaws.com/testing, you will make POST requests to https://1111111.magic.amazonaws.com/testing/contact.  Put this URL into the jQuery code as the value of var URL.

SES + Email Validation

We are using SES to send emails.  For testing, it restricts the email addresses that can “send” and “receive” messages to ones that have been “verified”.  It is very simple: just go to the SES page of the Console and choose Email Addresses > Verify New Email Address.  Do this for each email address you would like to “send as” and “send to”.

Try it Out

This should get you most of the way.  If everything worked out, you should be able to submit your contact form and then receive an email with its contents.

Post questions in the comments if you hit any problems.  This is only a summary and pared-down version of the process I went through.

Update

Jeff Richards (http://www.jrichards.ca/) recommended an all-in-one HTML + JavaScript snippet.  Here is a Github Gist of that snippet: https://gist.github.com/tgroshon/04b94aee6331bb65f05f4e0d7ff2e8bd


Header Files, Compilers, and Static Type Checks

Have you ever thought to yourself, “why does C++ have header files”?  I had never thought about it much until recently, when I decided to do some research into why some languages (C, C++, Objective-C, etc.) use header files while other languages do not (e.g. C# and Java).

Header files, in case you do not have much experience with them, are where you put declarations and definitions.  You declare constants, function signatures, type definitions (like structs), etc.  In C, all these declarations go into a .h file, and then you put the implementation of your functions in .c files.

Here’s an example of a header file called mainproj.h:

#ifndef MAINPROJ_H__
#define MAINPROJ_H__

extern const char *one_hit_wonder;

void MyFN( int left, int back, int right );

#endif /* MAINPROJ_H__ */

Here is a corresponding source file mainproj.c:

#include <stdio.h>

#include "mainproj.h"

const char *one_hit_wonder = "Yazz";

void MyFN( int left, int back, int right )
{
    printf( "The only way is up, baby\n" );
}

Notice that the header only has the function declaration for MyFN, and it does not specify what one_hit_wonder is set to. But why do we do this in C but not in Java?  Both are compiled and statically typed.  Ask GOOGLE!

A great MSDN blog post by Eric Lippert called “How Many Passes” was very helpful.  The main idea I got out of the article is that header files are necessary because of Static Typing.  To enforce type checks, the compiler needs to know things like function signatures to guarantee functions never get called with the wrong argument types.

Eric lists two reasons for header files:

  1. Compilers can be designed to do a single pass over the source code instead of multiple passes.
  2. Programmers can compile a single source file instead of all the files.

Single Pass Compilation

In a language like C#, which is statically typed but has no header files, the compiler needs to run over all the source code once to collect declarations and function signatures, and then a second time to actually compile the function bodies (where all the real work of a program happens), using the declarations it collected to do type checks.

It makes sense to me that C and C++ would have header files because they are quite old languages, and the CPU and memory resources required to do multiple passes would have been very expensive on computers of that era.  Nowadays, computers have more resources, and multiple passes are less of a problem.

Single File Compilation

One other interesting benefit of header files is that a programmer can compile a single file.  Java and C# cannot do that: compilation occurs at the project level, not the file level.  So if a single file is changed, all files must be re-compiled.  That makes sense, because the compiler needs to check every file in order to gather the declarations.  In languages with header files, you can recompile only the file that changed, because the header files guarantee type checks between files.
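For example, with GCC you can recompile just the changed translation unit and relink, leaving every other object file untouched (main.c here is a hypothetical second file in the project):

gcc -c mainproj.c              # recompile only the changed file into mainproj.o
gcc main.o mainproj.o -o app   # relink the existing objects into the program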

Relevance Today

Interesting as this may be, is it relevant today if you only do Java, C#, or a dynamic language?  Actually, it is!

For instance, consider TypeScript and Flow, which both bring gradual typing to JavaScript. Both systems have a concept of declaration files.  What do they do?  You guessed it!  Type declarations, function signatures, etc.

TypeScript Declaration file:

module Zoo {
  function fooFn(bar: string): void;
}

Flow Declaration file:

declare module Zoo {
  declare function fooFn(bar: string): void;
}

To me, these look an awful lot like header files!

As we see, header files are not dead!  They are alive and well in many strategies for Type Checking.


Why you should be using Fig and Docker

This is an introductory article to convince and prepare you to try setting up your web app development environment with Fig and Docker.

The snowflake Problem

Let me take a moment to lay some foundation by rambling about dev environments.  They take weeks to build, seconds to destroy, and a lifetime to organize.  As you configure your machine to handle all the projects you deal with, it becomes a unique snowflake and increasingly difficult to duplicate (short of full image backups).  The worst part is that as you take on more projects, you configure your laptop more, and it becomes more costly to replace.

I develop on Linux and Mac and primarily do web development.  Websites have the worst effect on your dev environment because they often (read: always) need to connect to a number of other services like databases, background queues, caching services, web servers, etc.  At any given moment, I probably have half a dozen of those services running on my local machine to test things.  It is worse when I am working on Linux, because it is so easy to locally install all the services an app runs in production.  I routinely have MongoDB, PostgreSQL, MySQL (MariaDB), Nginx, and Redis running on my machine.  And let’s not even talk about all the Python virtualenvs or vendorized Rails projects I have lying around my file system.

Docker Steps In

Docker is such an intriguing tool.  If you have not heard, Docker builds on Linux container features (cgroups and namespace isolation) to create lightweight images capable of running processes completely isolated from the host system.  It is similar to running a VM, but much smaller and faster.  Instead of emulating hardware virtually, you access the host system’s hardware.  Instead of running an entire OS virtually, you run a single process.  The concept has many potential use cases.

But with Docker, you can start and stop processes easily without needing to clutter your machine with any of that drama.  You can have one Docker image that runs Postgres and another that runs Nginx without having them really installed on your host.  You can even have multiple language runtimes of different versions and with different dependencies: for example, several Python apps running different versions of Django on different (or the same) versions of CPython.  Another interesting side effect: if you have multiple apps using the same kind of database, their data will not be on the same running instance of your database.  The databases, like the processes, are isolated.

Docker images are created with Dockerfiles.  They are simple text files that start from some base image and build up the environment necessary to run the process you want.  The following is a simple Dockerfile that I use on a small Django site:

FROM python:3.4
MAINTAINER tgroshon@gmail.com

ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/

Simple, right?  For popular platforms like Python, Ruby, and Node.js, prebuilt Docker images already exist.  The first line of my Dockerfile specifies that it builds on the Python 3.4 image.  Everything else after that configures the environment.  You could even start with a basic Ubuntu image and apt-get all the things:

FROM ubuntu:14.04

# Install.
RUN \
  apt-get update && \
  apt-get -y upgrade && \
  apt-get install -y build-essential && \
  apt-get install -y software-properties-common && \
  apt-get install -y byobu curl git htop man unzip vim wget

From there you can build virtually any system you want. Just remember, the container only runs a single process. If you want to run more than one process, you will need to install and run some kind of manager like upstart, supervisord, or systemd. Personally, I do not think that is a good idea.  It is better to have a container do a single job and then compose multiple containers together.

Enter Fig

The problem is, Docker requires quite a bit of know-how to get configured in this kind of useful way.  So, let’s talk about Fig.  It was created specifically to use Docker to handle the dev environment use case.  The idea is to specify which Docker images your app uses and how they connect.  Then, once you build the images, you can start and stop them together at your leisure with simple commands.

You configure Fig with a simple yaml file that looks like this for a python application:

web:
  build: .
  command: python app.py
  links:
   - db
  ports:
   - "8000:8000"
db:
  image: postgres

This simple configuration specifies two Docker containers: a Postgres container called db and a custom container built from a Dockerfile in the directory specified by the web.build key (the current directory in this case).  Normally, a Dockerfile will end with the command (CMD) that should run in it; web.command is another way to specify that command.  web.links is how you indicate that a process needs to be able to discover another one (the database in this example).  And web.ports simply maps a host port to the container port so you can visit the running container in your browser.
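As a side note, the link should make the database reachable from the web container under the hostname db, so (assuming the default Postgres port) a connection string along the lines of postgres://postgres@db:5432/postgres would work from the app’s perspective.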

Once you have the Dockerfile and fig.yml in your project directory, simply run fig up to start all of your containers and ctrl-c to stop them.  When they aren’t running, you can also remove them from Fig by running fig rm, although it seems to me that the Docker images still exist, so you might also want to remove those for a completely clean install.
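For reference, the day-to-day flow looks something like this (Fig has other subcommands too, like ps and run):

fig build   # rebuild your images after changing a Dockerfile
fig up      # start all containers and stream their logs
fig rm      # remove stopped containers for a clean slate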

Conclusion

Once I learned about Docker and Fig, setting them up became one of the first things I do on new web projects.  The initial configuration can take some time, but once you have it, it pays for itself almost immediately, especially when you add other developers to a project.  All they need to have installed are Docker and Fig, and they are up and running with a simple fig up command.  Configure once, run everywhere.  Harness all that effort spent configuring your personal machine and channel it into something that benefits the whole team!


Why I did not like AngularJS in 2014

Edited Mar 2015:  Previously titled “Why I Do Not Recommend Learning AngularJS”.  In retrospect, my arguments are superficial and likely apply only to the specific situation I was in.  In addition, I was wrong that learning a new tech is wasteful.  Learning anything makes you better at learning, and that is what we should all be trying to do.  Learn what you’re excited about!

tl;dr

Despite its good qualities, I did not enjoy learning AngularJS.  With all the available options of web frameworks (e.g. Ember, React, Backbone, etc.), Angular fell behind in the following three areas:

  1. Performance
  2. Complexity
  3. Non-transference of Skills

Introduction

A lot of people ask me what I think about AngularJS, so I wanted to take some time to collect my thoughts and try to explain it clearly and rationally.  The following is the result.

I would like to start by saying AngularJS has a lot of good qualities, or else not so many people would use it so happily.  It makes developers excited to do web development again and that is hugely important.

With that being said, I did not like learning AngularJS.  With all the available options of web frameworks (e.g. React, Ember, Backbone, etc.), Angular falls behind in the following three areas:

  1. Performance
  2. Complexity
  3. Non-transference of Skills

Performance

I normally do not like picking on performance flaws, especially when a conscious decision has been made to trade performance for productivity.  I can understand that trade-off.  I do Ruby on Rails 😉

However, Angular’s performance has such serious problems that it becomes almost unusable for certain features or whole applications.  The threshold of how much work you can make Angular do on a page before performance tanks is scary low!  Once you have a couple thousand watchers/bindings/directives doing work on a page, you notice the performance problems.  And it is not actually that hard to get that much binding happening on a page.  Just have a long list or table with several components per row, each with a healthy number of directives and scope bindings, and then add more elements as you scroll.  Sound like a familiar feature?

Again, I’d like to say that performance is not that terrible a problem to have, because new versions of a framework can (and almost always will) optimize around common performance problems.  I do not think performance will be a long-term problem in Angular; but it is a problem right now.

Complexity

Of all the most popular front-end frameworks (Ember, React, and Backbone), Angular is the most complex.  Angular has the most new terms and concepts to learn of any JavaScript framework: scopes, directives, providers, and dependency injection.  Each of these concepts is vital to using Angular effectively for any use case beyond the trivial.

Ember is also quite complex, but the framework itself gives direction for project organization, which mitigates some complexity.  Ember is also better at mapping its concepts to commonly used paradigms, which I will talk about in the next section.

With React, you can be productive after learning a few function calls (e.g. createClass() and renderComponent()), creating components with objects that implement a render() method, and setting your component state to trigger re-renders.  Once you wrap your head around what React is doing, it is all very simple.  My experience was that after weeks with Ember and Angular, I still did not grok all the complexity or feel like a useful contributor to the project.  After a day with React, I was writing production-quality UI with ease.

Non-transference of Skills

I have been a web developer for years now.  Not a lot of years, but a few.  My first dev job was in college, building UI with jQuery, which I learned very well.  Then I remember my first job interview outside of school, with a company that built web applications in vanilla JavaScript with no jQuery.  I got destroyed in the JavaScript portion of the interview because my jQuery knowledge mapped very poorly to vanilla JavaScript.  In fact, I would go so far as to say that I knew next to nothing about JavaScript even after a year of extensive web development with jQuery.

Why didn’t my jQuery skills transfer?  Because my development with jQuery taught me a domain-specific language (DSL).  While DSLs can improve productivity, knowledge of them will seldom transfer to other areas.  The reverse can also be true.  You could call this inbound and outbound transference.

Angular is like jQuery: it has transference problems.  The most serious problem in my mind is that Angular suffers from both inbound and outbound transference problems.  Knowing JavaScript, MVC, or other frameworks was less helpful while learning Angular, and what I learned from doing Angular has not helped me learn other things.  But maybe that’s just me.

Conclusion

If you know Angular and are productive with it, great!  Use it.  Enjoy it.  Be productive with it.  I tried Angular, and it just didn’t do it for me.

If you are looking for a framework that is both scalable and flexible, look into React.  In my experience, it is the easiest to learn and plays the nicest with legacy code.  Iterating React into almost any project is quite easy.  Of all the frameworks, React is probably the easiest to get out of because all your application logic is in pure JavaScript instead of a DSL.  The strongest benefit I have seen when using React is the ability to reason about your app’s state and data flow.  If you want a high-performance and transferable application, I highly recommend React.

If you want the experience of a framework that does a lot for you, go for Ember.  It will arguably do more for you than even Angular.  As I have seen, the Ember team is also more responsible/devoted to supporting large-scale applications or corporate clients which require stability and longevity.  They are the clients who do not want to be rewriting their apps every other year.  The one drawback I have seen is that Ember prefers to control everything of your app and does not play nice with other technologies.  If you have substantial legacy code, Ember will be a problem.

AngularJS will be releasing 2.0 soon, and it will be completely different from Angular 1.x.  Controllers, scopes, and modules are all going away.  To me, that seems like a realization by the Angular core team that some of those neologisms did not work out.
