Fancy list numbering with CSS

It's not every day that I'm styling a list like:

(Screenshot: an ordered list with zero-padded counters – 01, 02, 03, and so on.)

But today was. So I came across a wonderful piece on CSS Tricks that pointed me in the right direction.

On the list container (an <ol>), I added the CSS rule:

/* let's pretend my <ol> has a class 'ol' */
.ol {
  counter-reset: my-list-counter;
}

Which acts as a counter set/reset.

And on each item (<li>), I needed to increment the counter and set the content from the current counter value. I chose to format it as decimal-leading-zero (the second argument to counter(); any valid list-style-type will do). All together it looks like:

.li:before {
  content: counter(my-list-counter, decimal-leading-zero);
  counter-increment: my-list-counter;
}

Link: practical flexibility in systems

Harry at CSS Wizardry writes (below) about building safeguards into a system, versus excluding "forbidden" use cases. It resonated a lot with me, since I've been building a design system. Flexibility and pragmatism have been design principles from the beginning, so that we build a system that outlives us, or at least, any use case we can think of right now.

Link: "Airplanes and Ashtrays"

CSS grid auto placement in IE11

I found out the hard way that IE11's implementation of CSS Grid (the old spec) doesn't support auto placement. It was the one feature my project needed to maintain the same layouts in IE11 as the evergreen browsers.

So I came up with a "polyfill" script that finds grid containers and, for each row of children, derives their column spans and positions, then sets the IE-specific CSS properties.

There are a few places to tweak if you want to drop this in your project – the logic for figuring out column spans checks for CSS classes, with each span having its own class, which is specific to my use case. And your project might also not care about manually adding gutters. My hope is that this will at least get you going in the right direction.

Code here.

Knowing when an element is in view

I started working on a lazy image loading component today. There are several pre-existing solutions on the internet, but nothing I found took a modern approach. That is, there's a new API called the Intersection Observer that makes in-view checking really simple.

First, you create the observer with a callback:

const observer = new IntersectionObserver(callback)

Then start observing a DOM node (e.g., one with ID my-element):

observer.observe(document.getElementById('my-element'))

Now, when that element goes in or out of view, the callback (above) is fired with an array of IntersectionObserver entries (one entry for each element you're observing).

So, when the callback gets called, you can check if the element is intersecting:

// in the callback
const myCallback = (entries) => {
  const myElementEntry = entries[0]

  if (myElementEntry.isIntersecting) {
    // it is in view, do something!
  }
}

And when you want to stop listening, you can either remove the listener on that element:

observer.unobserve(document.getElementById('my-element'))

Or you can completely remove the listener (and all entries), with disconnect:


Which is what I chose to do once my image element was in view.

Sample component on Github

A better solution for mocking CSS modules in jest

I'm using CSS modules and testing with Jest in my latest project, 'cuz they're the best. But I kept wanting more from the CSS mocking experience in unit tests. The recommended solution using identity-obj-proxy wouldn't do it for me, because I had lots of duplicate classNames, which led to ambiguity in unit tests.

I started by building a jest transform, then realized that was the wrong approach, because js files importing the css were being processed by babel-jest.

So I went back to looking at moduleNameMapper in Jest's config. I knew that I could manually read the directory of CSS files I wanted transformed, and build an object from that (with inspiration from Brent Jackson's css-to-object).

And I did it, and it worked! I ran it against an existing test suite to reveal any stress cases, and tweaked accordingly (looking at you: breakpoints, pseudo classes, dashes in selectors).
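The heart of the idea can be sketched in a few lines (a simplified, hypothetical version – the real gist handles breakpoints, pseudo classes, and more, and cssToScopedObject is a made-up name):

```javascript
// Hypothetical sketch: parse class selectors out of a CSS string and map
// each one to a file-scoped name, so duplicate classNames in different
// files stay distinguishable in test assertions.
const cssToScopedObject = (css, fileId) => {
  const selectors = css.match(/\.[a-zA-Z_][\w-]*/g) || []
  return selectors.reduce((obj, selector) => {
    const name = selector.slice(1) // drop the leading dot
    obj[name] = `${fileId}__${name}`
    return obj
  }, {})
}
```

A moduleNameMapper target module would then run something like this over each matched CSS file and export the resulting object.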

It's up and public in its raw and newborn state as a gist, here. Aiming to open source it as a module once the dust settles.

Yey, making things work!

What we talk about when we talk about consulting

I am a software developer. I’m good at some things (looking at you, front end & javascript), fine at other things, and meh at a couple things. I also have decent people skills. So when the right opportunity at the right company presented itself, I became a consultant.

Consultants do lots of good things – provide advice, get their hands dirty as practitioners, observe how teams and processes work, suggest change when something needs tweaking. Sound grand? Well, maybe. But those are also just good skills to have as a professional. And especially as a professional software developer.

For me, consulting is not spelled with a capital C. It’s not a noble aspiration, or a platonic ideal. Consulting is doing the right things (working hard, communicating well, continuously improving) for the right reasons (from sincere concern, for a worthwhile change, to make you and your team’s work better). And that’s no different than being a good colleague and a hard worker. 

No matter what you call yourself, just make sure you're doing work you're happy with and being nice to others.

Open sourced: a utility for shallow rendering React components

After lots of copy/pasting a small utility that makes shallow rendering a React component a little easier, I've open sourced it. It's called renderShallow, and it's now on Github and npm as render-shallow.

My motivation for it came when I started noticing that in most of the React component tests I wrote, I simply wanted a shallow rendered component to test. The ShallowRenderer API is a little verbose, between the creation and the getting of the output. So I started abstracting that (the .output returned from renderShallow). When I found myself wanting to rerender the component, either because of state or props changes, I added the ability to both re-fetch the output (rerender), or render the element again with new props (rerenderElement).

Getting input values in a dynamic form

Today I was experimenting with how to get all the values in a form. The form was dynamic, so I didn't know which inputs and textareas could be in it. So I turned to my old standby, document.forms (supported in _all_ browsers!).

When I started iterating over the form (document.forms[formName]), which is array-like, I noticed that all elements other than inputs and textareas were filtered out (even if an input was nested inside a label). That means I can easily map over the inputs, get their values, and do as I wish with them!

Consider the following form:

<form name="hey">
  <label><input name="innie" value="i am an input"/></label>
  <textarea name="textie">i am a text area</textarea>
</form>

In my javascript, if I convert it to an array, I can map over the form's inner elements and get values from them!!! Like so:

const mapInputNamesToValues = () => {
  const form = document.forms.hey
  const formAsArray =
  return formAsArray.reduce((result, input) => {
    result[] = input.value
    return result
  }, {})
}
// note: you could convert to an array and reduce in one line
// via [], callback)
// but I chose to separate it for clarity.

Play with the demo below, or on Codepen:

Link city – style guide reading recommendations!

On a recent project, I did a lot of research on pattern libraries and modular components. They're too good not to share!

Native form validation in browsers

Today I stumbled on the DOM API `document.forms`, which holds a collection of all the forms on a given page. You can access individual forms and do things like check the validity of all their inputs. Super powerful!

Here's a demo on Codepen:

14 tips for better front end testing

Today Stride published a set of tips I put together for testing your front end code. I had some fun with it, by turning it into a Buzzfeed-style listicle.

Here it is in syndication:

1. Test output (how the DOM changes), not what happens behind the scenes (implementation)

Think about the user’s perspective and how they would interact with the component(s). Testing output allows you to change implementation, without affecting output. That means tests only break when there’s an actual functionality change.

Garth fears change

2. Make reusable components that can be tested in isolation

They’re easier to test and make tracking down changes faster.

Reduce, reuse, recycle

3. Cut down on the amount of DOM rendering in tests

When setting up your tests, prefer `before` over `beforeEach` to cut down on repeat renderings. They can slow things down considerably.

Liz Lemon says "Let's do this."

4. Isolate mutations to test contexts

Relying on other cases mutating state or the DOM leads to brittle tests that are hard to refactor or reason about. Test contexts should be self-sufficient. Resist the urge to share data.

Beyonce's Miss Independent

5. Be explicit and don’t abstract too much setup or repetition

By writing out and repeating steps, it’s easier to scan the code while under stress.

The Beatles, repeating

6. Choose a tool that allows you to isolate cases for a faster feedback cycle

Have and use the ability to isolate a test or tests, without running them all. It’s easier to focus and faster to find the change.

Forever alone cat

7. Turn on source maps, if applicable

Get pointed to the source code quicker, if you’re using a compiler.

What year is it? Also, RIP Robin Williams

8. Use a headless browser and rerun tests on file change

The key to TDD is speed, speed, speed.

A headless ghost spins a pole

9. Allow for in-browser debugging

For the times you just need those Chrome dev tools. (And maybe Firefox or Safari)

A really strange web browser

10. Have consistent names for your test blocks

File names, methods, modules

Belle swings across her library on a ladder

11. Make your `describe`s, `context`s, `it`s human readable

Mad lib style: [Module name], when [state], (and), it [behavior]

It's a mad, mad, mad lib world

12. Include polyfills if testing on different browsers

Browsers behave differently, especially older versions of PhantomJS versus something like Chrome. A polyfill like es5-shim will get you to parity.

Show browser variations no mercy

13. Use a pluggable expectation system for more meaningful assertions

For example, chai-jsx (or expect-jsx or jasmine-expect-jsx), if you’re using React, allows you to assert with JSX on a component. Same goes for libraries like immutable, sinon, bluebird.

Plug it in, plug it in

14. Fail the build on console.error

A third party developer is politely telling you there’s been a horrible problem, without crashing your app (isn’t that all you can ask for in a client-side app?). This happens with prop type validation errors in React.
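One simple way to wire that up (a minimal sketch – test runners and CI setups vary) is to patch console.error in your test bootstrap so any call becomes a thrown error:

```javascript
// Minimal sketch: make any console.error call throw, so the test run
// (and therefore the build) fails loudly instead of logging quietly.
const originalError = console.error
console.error = (...args) => {
  throw new Error(`console.error was called: ${args.join(' ')}`)
}
```

Keep originalError around if you need to restore the real behavior after the suite finishes.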

Leslie NO-OPE

Repeating keys in vim on OSX

In vim, holding down a key can save you a lot of repeated effort. In newer versions of OS X, holding down a key was disabled, to allow for a special character entry dialogue. In every blog post about disabling this feature, I found the blunt force approach of turning it off globally. It didn't sit well with me, because sometimes I do want to use special characters, and it's harder to do without the entry dialogue.

The one weird trick? You can turn off the press-and-hold behavior for specific applications! For vim, it's:

defaults write org.vim.MacVim ApplePressAndHoldEnabled -bool false

(Enter that in your terminal)

If you need to do this for other apps, check out the listings inside ~/Library/Preferences, and remove the .plist from the end of the file name (e.g., org.vim.MacVim.plist).

Happy coding!


Deciding on new technologies

In my day to day as a software consultant, I often help evaluate new technologies (be it a framework, library, language). Over time, I noticed certain questions and thoughts continued to come up. Things like – how’s the community around it? How many people are using it? Is it easy to learn? And so on.

Below is my effort to give voice to all the subtle feelings, the small questions and thoughts and everything else that happens while researching.


It is important for a technology to have an engaged community, or strong interest from a community (e.g., React in front end dev circles). There are several ways to measure this, including the following:

  • Social media engagement - People are tweeting and writing blog posts about this tech. Their tone would be one of excitement or intrigue. Others may write about how they are adopting the new tech.
  • Package downloads in a given month - Some package managers, like npm, give statistics on how many downloads a package has had over time. Anywhere in the thousands is good. For example, React and RxJS have ~1 million downloads per month.
  • Github stars and forks - Another measure of usage, and peer engagement
  • Number of pull requests and their acceptance rate - Is the project actively being developed and improved? It’s also important to gauge how the maintainers regard changes. That will affect contribution, and by extension, adoption.
  • Number of open issues - How many there are, and how severe. This will give you a sense of how ready it is (see also: production worthiness). It is also important to gauge maintainer quality. Are they helpful and open, or closed and prickly? The project won’t get far with bad maintainers.

Questions to ask

As a developer evaluating a new technology, there are a series of questions you may ask, or that may be asked of you, before adoption. The following questions should give you a good sense of whether or not the technology is the right choice now. The answer to each of these does not have to be yes, but the majority should be answered positively.


  • What are the technology’s advantages over the current offerings?
    • Also, what holes does it fill, or how does it improve current offerings?
  • Could you solve the same problem with an existing piece of technology? (Put another way - what else could you do?)


  • Is the documentation good?
  • What does the project’s roadmap look like?
  • Will you write less or better code with this technology?
  • Does this technology align with the language or framework it is intended to be used with?
    • See their philosophies and design goals
  • Is the source code well tested?

Community engagement

  • What excites you about this technology?
  • Could you convince someone with no experience in this particular tech stack of this technology choice? (E.g., a seasoned ruby developer on webpack)


  • If the technology integrates into another system, do the two roadmaps and philosophies align?
  • Is there an escape hatch in case this technology does not work? (E.g., rendering HTML from React components)
  • Is it stable enough to use in production?
  • Will this technology require a lot of churn, now or down the road? (E.g., React in early versions)
  • How is the ecosystem surrounding this technology?

Extended usage

  • Besides its intended usage, can you see the technology being adopted in a different setting? (E.g., Redux in server-side apps)
  • Do you see yourself working with this technology in one week, one month, one year, five years?

Project adoption

  • How would you convince another developer to change to this technology?
  • What types of projects would this be useful for?
  • Is this easy to adopt into an existing project?
  • Is this easy for a junior to learn?
  • Could you introduce this to a skeptical team in isolated pieces? (See also: escape hatch)

Variables in npm scripts

Did you know you can have variables in your package.json? They can be very handy, especially when using npm scripts. Consider the following:

  "main": "index.js",
  "scripts": {
    "start": "node $npm_package_main"

It eliminates the need for copy-pasting. Any key can be referenced, beginning with "$npm_package_" and adding an underscore for every level you go down. Say you had a config object, where you stored reusable values, like a domain address, that's passed as an argument in a script:

  "main": "index.js",
  "config": {
    "domain": "localhost:8000"
  "scripts": {
    "start": "node $npm_package_main --domain $npm_package_config_domain"

Super useful!

PhantomJS 2 in Travis CI

I was continuing work on my web boilerplate (note: now moved under the stride-nyc org) when I decided to add Travis CI integration for builds.

Travis CI is triggered on pushes to certain branches and on pull requests. To configure it, you create a .travis.yml file in your repository. My tests are running using a forked PhantomJS 2 launcher with Karma, which makes running your tests against PhantomJS 2 really easy, while we wait for official support. So, what's the problem? The launcher works great locally, but doesn't work on Travis CI's Ubuntu machines.

After some Googling, I landed on a post by Mediocre Labs. It describes how to pre-install a custom built PhantomJS 2 binary on Travis CI. That part worked just fine, but I still found myself fighting with the PhantomJS2 launcher which wouldn't recognize the custom binary, as well as the PhantomJS v1 launcher, which requires an npm dependency on PhantomJS, which isn't an option.

I worked around it by explicitly pointing to the Travis CI custom PhantomJS binary in my build task. It nicely overrides the binary location for the PhantomJS2 launcher. So, putting it all together, here's the build task:

env PHANTOMJS_BIN=/usr/local/bin/phantomjs karma start

And the .travis.yml config to download the custom PhantomJS 2 binary:

  - wget
  - tar -xjf phantomjs-2.0.0-ubuntu-12.04.tar.bz2
  - sudo rm -rf /usr/local/phantomjs/bin/phantomjs
  - sudo mv phantomjs /usr/local/phantomjs/bin/phantomjs

And if you're really curious, here's the project on Travis CI.

Happy coding!


2016 web boilerplate

I was about to start a new side project, when I realized the crazy amount of setup I'd have to do to get up and running. I wanted to use my favorite latest things, but I had nowhere to start. So I started grabbing from prior projects and some open source libs, to create my latest starter framework. I'm optimistically calling it my 2016 web boilerplate.

It includes, in no specific order:

  • Webpack
  • JSCS
  • ESLint
  • Karma
  • Mocha, Chai, Sinon
  • PhantomJS 2
  • React
  • PostCSS
  • ES2015+ via Babel
  • Hot module reloading


Now get out there, and start building cool things! Happy new year!

Migrating a legacy node project to Babel

I’m working on a node project that has a mix of legacy and new code. The new code is written in ES2015 (a.k.a. ES6). The legacy code is your standard ES5, but with some unfortunate global variable leaks, oddly placed commas and semicolons, and no unit tests to speak of.

We are using Babel’s require hook to transpile our code on the fly, using the ES2015 preset. This means it runs on both new and legacy code.

By default, one of the plugins that make up the preset applies strict mode to every file. It’s there for module loading, and it makes modules ES6 spec compliant. This causes trouble for our legacy code: it’s non-strict-mode compliant (a.k.a. loose), which causes runtime exceptions.

So, we were at a crossroads. We needed to either refactor the legacy code, or attempt to disable strict mode. In the ideal world, we’d do the former. In reality, a colleague had already tried that, but had to abandon it due to time constraints, and a lack of confidence in introducing large changes to an untested code base. So I headed to Babel’s Slack room where I got the tip I was looking for: how to (temporarily, I swear!) disable strict mode on the module plugin.

The solution involved removing the ES2015 preset and manually including all the plugins. For the troublesome modules plugin, I was able to pass a parameter setting strict mode to false. My .babelrc became:

  "plugins": [
    ["transform-es2015-modules-commonjs", {"strict": false}]

(Note the last line, with the object in an array: ["transform-es2015-modules-commonjs", {"strict": false}])

And my package.json ballooned by about 20 lines:

  "dependencies": {
    "babel-plugin-check-es2015-constants": "^6.2.0",
    "babel-plugin-transform-es2015-arrow-functions": "^6.1.18",
    "babel-plugin-transform-es2015-block-scoped-functions": "^6.1.18",
    "babel-plugin-transform-es2015-block-scoping": "^6.1.18",
    "babel-plugin-transform-es2015-classes": "^6.2.2",
    "babel-plugin-transform-es2015-computed-properties": "^6.1.18",
    "babel-plugin-transform-es2015-destructuring": "^6.1.18",
    "babel-plugin-transform-es2015-for-of": "^6.1.18",
    "babel-plugin-transform-es2015-function-name": "^6.1.18",
    "babel-plugin-transform-es2015-literals": "^6.1.18",
    "babel-plugin-transform-es2015-modules-commonjs": "^6.2.0",
    "babel-plugin-transform-es2015-object-super": "^6.1.18",
    "babel-plugin-transform-es2015-parameters": "^6.1.18",
    "babel-plugin-transform-es2015-shorthand-properties": "^6.1.18",
    "babel-plugin-transform-es2015-spread": "^6.1.18",
    "babel-plugin-transform-es2015-sticky-regex": "^6.1.18",
    "babel-plugin-transform-es2015-template-literals": "^6.1.18",
    "babel-plugin-transform-es2015-typeof-symbol": "^6.1.18",
    "babel-plugin-transform-es2015-unicode-regex": "^6.1.18",
    "babel-plugin-transform-regenerator": "^6.2.0",

Now my code’s running smoothly, and I have a migration path set for my code base. Hooray!

Victory dance!