JavaScript is now used for advanced web applications, rich user interfaces, and single page apps. Ensuring high-quality JavaScript code requires enforced coding guidelines, automated and manual testing, measuring code quality, and accountability. Key techniques include code reviews, static analysis, unit tests, and visibility of quality metrics.
2. JavaScript Use In 2003
Form validation
Custom cross-browser code to work around differences in the DOM
Basic page manipulation
3. Replacing Flash
Advanced User Interface Components
Single Page Web Apps
Working around browser vendor prefixes
Data connections to cross-domain third-party web services
Canvas API
HTML5 Media APIs
History API
Drag & Drop API
Managing Offline Application Cache
Local Storage APIs
WebRTC
Web Sockets API
Web Workers
Social Media Integration
Modernizr
jQuery
Zepto
Grunt
RequireJS
postMessage API
Node.js
GeoLocation
Device Orientation, Direction, and Motion Events
Touch Events
Form validation
Web Audio
JavaScript Use In 2013
Parallax and Other Effects
Responsive Foreground Images
Polyfills
matchMedia API
MV* Frameworks
CSS Animation & Transition Events
Full Screen API
17. Quality JS Comes From
A tight, focused team of experienced user-interface developers with a decent amount of time and an unchanging brief
Or does it?!
25. var Dates = (function($) {
        "use strict";

        function isMonday(dateObj) {
            var inputDayOfTheWeek = dateObj.getDay(),
                mondayDayOfTheWeek = 1;

            // Check to see if the supplied date is a Monday
            return (inputDayOfTheWeek === mondayDayOfTheWeek);
        }

        return {
            isMonday: isMonday
        };
    }(jQuery));
26. var Dates = (function($) {
        "use strict";

        function isMonday(dateObj) {
            var inputDayOfTheWeek = dateObj.getDay(),
                mondayDayOfTheWeek = 1;

            return (inputDayOfTheWeek === mondayDayOfTheWeek);
        }

        return {
            isMonday: isMonday
        };
    }(jQuery));
27. /**
        Utility methods for handling dates
        @class Dates
        @static
    */
    var Dates = (function($) {
        "use strict";

        /**
            Lets you know if a supplied date is a Monday
            @method isMonday
            @param {Date} dateObj date to test
            @return {Boolean} true if supplied date is a Monday
        */
        function isMonday(dateObj) {
            var inputDayOfTheWeek = dateObj.getDay(),
                mondayDayOfTheWeek = 1;

            return (inputDayOfTheWeek === mondayDayOfTheWeek);
        }

        return {
            isMonday: isMonday
        };
    }(jQuery));
28. /**
        Utility methods for handling dates
        @class Dates
        @static
    */
    var Dates = (function($) {
        "use strict";

        /**
            Lets you know if a supplied date is a Monday
            @method isMonday
            @param {Date} dateObj date to test
            @return {Boolean} true if supplied date is a Monday
        */
        function isMonday(dateObj) {
            var inputDayOfTheWeek = dateObj.getDay(),
                mondayDayOfTheWeek = 1;

            return (inputDayOfTheWeek === mondayDayOfTheWeek);
        }

        return {
            isMonday: isMonday
        };
    }(jQuery));
37. describe("Dates module - isMonday method", function() {
        it("Recognises 22 July 2013 as a Monday", function() {
            // Date months are zero-based, so 6 = July; constructing the
            // date this way avoids UTC string parsing shifting the local day
            var isMonday = Dates.isMonday(new Date(2013, 6, 22));
            expect(isMonday).toBe(true);
        });

        it("Knows 25 July 2013 is not a Monday", function() {
            var isMonday = Dates.isMonday(new Date(2013, 6, 25));
            expect(isMonday).toBe(false);
        });
    });
44. Automated & Manual Testing
Configure Grunt To Run Static Code Analysis and Unit Tests
Run Unit Tests Cross-Browser Via BrowserStack API
Use Selenium For Automated Integration Testing
Perform Manual, Cross-Browser Testing
51. Visibility & Accountability
Surface Quality Metrics Via Information Screens:
Project-Level Project Status
Department-Level Project Status Overview
Department-Level Project Action List
Some associated examples at: https://github.com/dennisodell/High-Quality-JavaScript-Code
Let's go back 10 years to the Web as it was in 2003. The days before jQuery, Ajax, and HTML5. JavaScript use was pretty basic.
Jump ahead to today and we're relying on JavaScript more than ever, with new libraries springing up all the time. The launch of touch-driven devices and the renewed push by the W3C to bring web standards forward have given us a deluge of new considerations for our code. Plus we still need to handle the older browsers with polyfills and fallbacks, adding complication to our code.
How can we sum up the use of JavaScript in 2013?
JavaScript code is getting large and complex.
We need to write high-quality code. We can't afford to have errors occur in such complex systems. A single error in the front end can stop a user interface from responding entirely. That means code we can have confidence in - error free, bug free, efficient and performant, with no memory leaks.
What stops us from writing the best code we can?
So what's the opposite of this - how does good-quality code get written?
In theory, then, this should be true - but how true is it? The truth is there are a multitude of factors affecting the delivery of good-quality code: from what systems you're interfacing with, to what day of the week it is, to what you ate for dinner last night - sickness can affect code quality!
This is what we do at AKQA to ensure high-quality JavaScript code...
Enforce. Test. Measure. Accountability. Feedback at every stage to developers.
Ensure code consistency across your files. Get a human to look over your code. Get a computer to look over your code. Prove your code works with unit tests.
Create function closures to 'sandbox' your variables and related code. This is known as the 'module' pattern. Pass any dependencies into this sandbox rather than referring to global variables. Return any internal variables or functions as properties and methods on the declared module name. Agree any naming conventions between your team members and stick to them.
Enforce ECMAScript 5's strict mode with "use strict" - this throws more errors for common coding mistakes in modern browsers, such as eval() use or referring to undeclared variables. This allows you to pick up and fix bugs before they affect other parts of your code.
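The effect can be sketched with a minimal example (the variable name is illustrative, not from the deck). Without strict mode, assigning to an undeclared variable silently creates a global; with it, you get an immediate error:

```javascript
// A minimal sketch of the kind of bug strict mode catches.
var strictModeError = (function() {
    "use strict";
    try {
        // Intended "var mistypedCounter = 1;" - in sloppy mode this
        // assignment would silently create a global variable.
        mistypedCounter = 1;
        return null;
    } catch (e) {
        return e.name; // "ReferenceError"
    }
}());

console.log(strictModeError); // prints "ReferenceError"
```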
Don't reinvent the wheel - if you've written a good-quality module or function before, use it again. Declare all your variables together at the top of each function. Perform comparisons of type and value using === (avoid ==).
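The === rule is worth a quick sketch: == coerces its operands to a common type before comparing, which can make unrelated values appear equal and mask bugs.

```javascript
"use strict";

// == coerces both operands to a common type before comparing,
// so unrelated values can appear equal; === compares type and value.
var looseMatch = (0 == "");   // true: "" is coerced to the number 0
var strictMatch = (0 === ""); // false: a number and a string differ in type

console.log(looseMatch, strictMatch); // prints: true false
```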
Remove any unnecessary comments from in your code that don ’ t really help.
Add structured documentation comments to your code using YUIDoc, JSDoc, or an equivalent. These strictly describe each part of your code: what it does, what its inputs are, and what values it returns. YUIDoc and JSDoc will then auto-generate a documentation website for you based solely on these structured comments, which will help new developers get up to speed and understand your code at a high level.
Sometimes you might need to refactor your code to simplify the way it is understood, making it more maintainable and easier to work with. Don't avoid this - do it as soon as you see the need rather than waiting until a later date.
If you run YUIDoc's parser over your JS file, it will generate an HTML site based on your documentation that looks something like this.
Crucible allows you to tag code for review and have fellow developers add comments to files and specific lines of code. You can then review, add a comment of your own, or adapt your code to suit the review feedback.
JSHint (and JSLint) allow you to perform static analysis of your code, without actually running it. They spot what look like errors in your code so you can fix them early before they cause any issues - things like undeclared variables, or variables declared but not used (good for spotting spelling mistakes!).
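JSHint reads its options from a .jshintrc file in your project; a minimal sketch might look like the following. The exact options you enable are a team decision, and JSHint permits comments in this file:

```javascript
// .jshintrc - illustrative options only
{
    "undef": true,       // warn on use of undeclared variables
    "unused": true,      // warn on variables declared but never used
    "eqeqeq": true,      // require === and !== instead of == and !=
    "browser": true,     // predefine browser globals such as document
    "predef": ["jQuery"] // other globals your code legitimately uses
}
```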
Unit tests are a fairly new concept to JS developers, but familiar to those of many other languages. A unit test is a small function that calls one of the functions in your code with a set of known inputs and checks that the output of each call is what was expected.
We use Jasmine, but there are many other unit test frameworks out there. You create an HTML file that includes the Jasmine library, the JS file you wish to test, and the JS file containing your unit tests. It then runs the tests automatically and shows you the results.
Assuming the tests pass, you will see the green bar at the top of the screen, together with a list of the individual tests that ran and their results.
Here's an example unit test for our isMonday method. Groups of tests in Jasmine are wrapped in a call to its 'describe' method, and individual tests are wrapped in a call to its 'it' method. Each test is described with a string, with the test itself contained within a function. The unit tests execute the isMonday method with known inputs, and check the output is as expected using Jasmine's 'expect' and 'toBe' methods. Unit tests prove that your code works as expected, which gives you and your team confidence in it.
We use three main tools for testing. The first is Grunt, a JS task runner built on Node.js. It can be configured to run JSHint and Jasmine unit tests automatically via plugin 'tasks'. This gets run on the developer's local machine before their code is committed.
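A Gruntfile for this setup might be sketched as follows - the file paths and the registered task name are illustrative, and the grunt-contrib-jshint and grunt-contrib-jasmine plugins are assumed to be installed via npm:

```javascript
// Gruntfile.js - a minimal sketch, not a complete build config
module.exports = function(grunt) {
    grunt.initConfig({
        jshint: {
            all: ["src/**/*.js", "test/**/*.js"]
        },
        jasmine: {
            unit: {
                src: "src/**/*.js",
                options: {
                    specs: "test/**/*.spec.js"
                }
            }
        }
    });

    grunt.loadNpmTasks("grunt-contrib-jshint");
    grunt.loadNpmTasks("grunt-contrib-jasmine");

    // "grunt test" lints the code first, then runs the unit tests
    grunt.registerTask("test", ["jshint", "jasmine"]);
};
```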
The second tool we use for testing is BrowserStack. This is a site that spins up virtual machines on the fly which contain only browsers and dev tools. You can select an OS and a browser, give it a URL, and it'll spin up a VM which you can then interact with as if you were using the real thing. Desktop and mobile OSes are supported, and it allows the creation of a 'tunnel' through which you can run local sites from your machine within their VMs. Very handy for mobile and old IE testing!
BrowserStack also has an automated service API for testing JavaScript code. It allows you to script up a number of VMs that connect to your unit tests, running them automatically across a wide selection of browsers. The results are then passed back for you to parse and check for errors. This is good for running on a build server.
Finally, the third tool we use for testing is Selenium. This allows you to script interaction testing in a browser, letting you test behaviour at a high level - e.g. if I click in one place, does a modal window pop up somewhere else? This is best run on a build and/or staging server.
So we use Grunt, BrowserStack and Selenium in addition to manual, cross-browser checks to create a quadruple-lock on our testing process, right from the local development stage all the way up to directly on the pre-release server.
Enforcing coding guidelines together with automated and manual testing gives a really strong foundation of quality. To take it further you need a measure of quality - a number you can work to improve, which reflects how good your code is.
Here at AKQA we use a system called SonarQube to store snapshots of our code and its quality. It gives us an overview of each of our projects, processing the code to produce metrics by which we can measure its quality.
The first metric is compliance against a set of language rules stored in SonarQube - e.g. commented-out lines of code, whether lines end in a semicolon, etc. Coverage is how much of our original code was executed by our unit tests; the higher the number, the more confident we can feel that our code is properly exercised and tested. We get the percentage measurement using a JS tool called Istanbul, which creates a new version of our original JS files that wraps our existing method calls and increments a counter when each is called. We then run our unit tests against this 'instrumented' file and it produces our code coverage figure. We're now experimenting with a new metric - cyclomatic complexity. This is a number representing how 'complex' a code file or function is, in terms of how many code branches and function executions it has. Writing smaller, unit-testable functions keeps this number low. There are Grunt plugins that produce both the Istanbul code coverage reports and the complexity reports.
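To make the cyclomatic complexity idea concrete, here is an illustrative function (not from the deck): each decision point adds another independent path through the code, and the metric counts those paths.

```javascript
"use strict";

// The baseline complexity of any function is 1; each if/else if,
// loop, case or ternary adds an independent path. The two branches
// below give this function a cyclomatic complexity of 3.
function describeDay(dateObj) {
    var day = dateObj.getDay();

    if (day === 1) {        // +1 path
        return "Monday";
    } else if (day === 5) { // +1 path
        return "Friday";
    }

    return "Some other day"; // the baseline path
}

// Months are zero-based, so 6 = July; 22 July 2013 was a Monday
console.log(describeDay(new Date(2013, 6, 22))); // prints "Monday"
```

Refactoring branchy code into smaller, single-purpose functions like isMonday above keeps this number low and makes each piece easy to unit test.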
Istanbul's generated LCOV-format report for our isMonday unit tests - 100% coverage!
It's no use taking measurements if no one sees them. So we're surfacing this information right up on the walls of our office. That way we can identify problems on projects and all chip in to help improve quality.
TV screens around our offices - in project areas, and in the area where the department heads sit. We surface both absolute metrics and trends over time.
The individual project screen shows overall project quality, a snapshot of each metric, and trends for those metrics over time.
The department overview screen is a bubble plot (in the background), with an overlay 'sticker' highlighting each project's metrics in turn, on rotation.
The project list screen is a project dump straight from SonarQube with our traffic light statuses applied to it.
So how do we ensure code quality at AKQA?
By enforcing coding guidelines, running automated and manual tests, measuring our code and surfacing quality metrics, and feeding back to developers at all stages, we're able to ensure we have the highest confidence in our code.