Building Gutenberg Blocks with Advanced Custom Fields (ACF) as a Stopgap

WordPress is a fantastic way to start learning to code. You rarely need to learn more than one new thing at a time in order to get building, and you can always find one new skill to learn (or improve) with each new project.

But all that changed when ~~the fire nation attacked~~ the block editor was introduced. Gutenberg raised the barrier to entry for programming with WordPress, and created a sizable speedbump for faithful WordPress developers who had never needed JavaScript before.

This doesn’t have to kill momentum, or stop new developers from using WordPress as their ‘way in.’ There’s a workaround for building custom blocks that requires minimal JavaScript. It isn’t necessarily a long term solution (though you can build very polished websites with it). But it is an effective stopgap to keep (or start) building websites while developing the skills necessary to start building native WordPress blocks.

Using WordPress as a ladder for improvement

Looking at WordPress as a framework, it’s easy to see the appeal of using it to learn web development. It’s extremely well-documented, surrounded by an active and supportive community, and written in a way that allows you to ‘wade in’ slowly to the code without any prior knowledge. Before Gutenberg, a new developer’s journey with WordPress might look something like this:

> You can grow with a ‘learn one new thing per project’ strategy for your entire career without leaving the WordPress ecosystem.

After exploring how WordPress works as a CMS by configuring sites without code, they can slowly start introducing their own code. They can graduate from building sites using drag & drop themes/plugins to adding custom CSS rules via the customizer as they start to learn CSS.

From there, our new dev can learn about child themes and create their first stylesheet for more comprehensive style rules. Then, they can learn about WordPress’s template hierarchy and custom templates. As they do, they can dip into as much or as little HTML and PHP as they’re comfortable with, leaning on the framework as much as they need.

As their skills grow, they can begin building their own themes, and start adding custom functionality in `functions.php`. Once they can modify WordPress behavior in `functions.php`, it’s easy to start developing plugins: First, by moving that custom functionality from their theme into simple custom plugins to understand how plugins are structured, then by building larger more complex plugins, some of which can modify or expand WordPress’s core features.
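To make that jump concrete, here's a minimal sketch of what moving a `functions.php` tweak into its own plugin might look like (the plugin name and the filter tweak here are just illustrative, not from a real project):

```php
<?php
/**
 * Plugin Name: Excerpt Tweaks
 * Description: Custom functionality moved out of the theme's functions.php.
 */

// The same hooks that work in functions.php work in a plugin file.
add_filter( 'excerpt_length', function ( $length ) {
    return 30; // shorten excerpts to 30 words
} );
```

Drop a file like this into `wp-content/plugins/` and activate it, and the behavior survives theme switches – which is exactly why graduating from `functions.php` to plugins matters.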

At this point, our ‘new’ dev can drop the adjective. They have enough experience reading and writing code to make learning a language for a different framework easier. Their proficiency in HTML/CSS and PHP is also strong enough to experiment with lighter PHP frameworks like Laravel or Symfony. It also makes it possible to start learning more about routing and interacting with the database (which WordPress normally handles for us) – or – they can continue learning about routing and APIs etc by extending those features in WordPress via the WordPress REST API.

Because WordPress handles so much out of the box, you can isolate one language, grammar, skill or topic to learn or improve on at a time – whether that’s language-specific, or a higher-level concept, you rarely need to learn more than one new thing at a time in order to get building. You can grow with a ‘learn one new thing per project’ strategy for your entire career without leaving the WordPress ecosystem.

The challenge with Gutenberg

In the hypothetical roadmap above (it’s not so hypothetical, I’m on it 🙂 ), there’s one now-ubiquitous language that’s noticeably absent: JavaScript. Before WordPress 5.0, JavaScript was a nice-but-optional language for WordPress developers. You could go a long way on writing or modifying only a few lines of JavaScript per project, if you did so at all. You certainly didn’t need a build process or NPM to get by.

> Gutenberg raised the barrier to entry for programming with WordPress, and created a sizable speedbump for faithful WordPress developers who had never needed JavaScript before.

However, when Gutenberg replaced the Classic Editor, developers had to worry about rendering content both in the template and in the editor via React components. Content is still stored the same way in the database, but instead of knowing exactly where and in what order content is rendered, developers suddenly needed to handle blocks appearing in any order. Crucially, they also needed to render each block appropriately both on the frontend and in the editor.

If you’ve played with the WordPress block editor, you can picture the jump from Classic to Gutenberg by thinking of the Classic Editor as a single unrendered block. Instead of one ‘block’ rendered only on the frontend, Gutenberg allows multiple blocks to be rendered in any order, even nested, both on the frontend and in the editor. There’s even a block that mimics the Classic Editor, bringing things full circle.

This triggered a lot of panic, though in practice, if you don’t need custom blocks, or only need simple ones, it doesn’t take long to learn enough JavaScript to get up to speed. Even if your block requires a little more than the bare bones, there are shortcut tools that can get you by while you learn the underlying code and tooling. The problem is, without any significant JavaScript experience, that underlying code and tooling can take a while to decipher.

Furthermore, if you do need to make more involved blocks, or if you want to modify the built-in editor experience at all, then all of a sudden you need HTML/CSS, some PHP, JavaScript, and familiarity with how WordPress is structured as a framework, including how its REST API works!

ACF as a Stopgap

If you weren’t sold on the Advanced Custom Fields (ACF) plugin before, it really shines in the transition period between the Classic Editor and Gutenberg. Before Gutenberg, ACF was used to augment the Classic Editor, and let WordPress content creators manage content on more robust pages. While the free version was exceptionally powerful, the pro license was generally considered to be one of the best (and even most necessary) investments for a serious WordPress site.

Building blocks using the ACF plugin allows newer developers to sidestep using JavaScript, and continue building sites learning one thing at a time. This method is also particularly useful for WordPress developers used to building sites and admin interfaces ‘the old way’ with just PHP/HTML/CSS, who haven’t had the chance yet to learn JavaScript, deeply™. There simply is no better way to make a Gutenberg block without JavaScript.

Limitations to using ACF

The most obvious drawbacks are:

  1. It introduces a dependency in every project
  2. That dependency isn’t free

To use ACF with Gutenberg blocks, you need a Pro license, making it harder to get away with the free version post-WordPress 5.0. If you’ve been building WordPress sites with the Classic Editor and ACF Pro previously, neither of these is a significant change. Furthermore, given that this is a major shortcut to building custom blocks, the return on investment for the Pro license is substantial – even more so than it was before.

This method also fails to take full advantage of Gutenberg, and arguably offers a worse editor experience, since the ACF UI remains largely unchanged. Though this is less attractive for brand new builds, it can be an advantage when updating a site that already uses ACF, since content creators are already familiar with that editing experience and can transition more easily.

Repeater fields don’t fit in the block settings sidebar.

Perhaps the biggest limitation of using ACF to build custom blocks is, instead of editing content directly as-styled like a native block, you make edits in an input field on the side menu and watch the change happen in real-time in the rendered preview block. For simple or small fields, this is almost as good as editing the preview directly. If you’re using more complex ACF fields like repeaters or field groups, then cramming those fields into the side menu doesn’t quite work.

ACF solves this by letting you edit blocks by ‘flipping’ them over to expose the ACF editor. When doing this, you can’t see how those changes would be rendered in real time (you’d have to keep flipping the block over to make tweaks).

Flipping ACF Blocks gives you more room to edit content.

This in no way limits the ability to create polished, high-quality websites; the published page looks the same either way. However, having native blocks that can be edited directly sandwiched between ACF blocks that can only be edited in the side menu (or flipped) creates a clunky editor experience. This might seem a small concession given the benefits, but it underscores that using ACF to circumvent JavaScript is a temporary measure on the way toward full Gutenberg adoption.

Will ACF drive mass developer adoption of the Block Editor?

JavaScript becoming a core part of the WordPress stack threw a wrench in things for a lot of established WordPress developers and made WordPress a little less accessible for new developers. Using ACF to build custom blocks can counteract that for a little while, buying time for working WordPress devs to learn JavaScript. Ultimately, embracing JavaScript is a good thing. Its rate of adoption has made it too big to fail, and it is the future of the web – at a minimum, it’s the intended future of WordPress. But until we can download languages instantly and directly into our brains, this is a very effective stopgap.

In the next post, we’ll look at a practical example of how to build a custom Gutenberg block with ACF, no JavaScript required.

Automating FTP Deployments with GitHub Actions

Deployment is the stage of the software development life cycle most likely to cause problems. Even if your deployment pipeline is perfectly set up, it’s the stage of the development process where any bugs you didn’t catch while building or during QA get shipped out to your end users. This might mean you need to roll back, or at the very least track down the problematic deploy at some point in the future.

Just looking for the workflow files instead of the explanation? Click here to skip to the final workflow files below.

Most modern deployment tools handle this by making deployments both automatic and repeatable, meaning that you can deploy with confidence and roll back when necessary (If your deployment processes aren’t like this right now, look for another post coming from us very soon!). However some of your clients may still depend on hosting infrastructure that requires deploying via FTP, which is much harder to automate and very difficult to roll back if needed.

Luckily, even if you are forced to use FTP as a deployment mechanism, GitHub Actions can help make this both an automatic and repeatable process to make deploys go much more smoothly. Let’s take a look at how this works.

Introducing GitHub Actions

GitHub Actions launched recently to make it “easy to automate all your software workflows”, which is exactly what we’re trying to do here. As long as your code is hosted in a GitHub repository, you can use GitHub Actions. Workflows are YAML files that run in response to certain events around a repository, such as a push to a particular branch, a pull request being merged, or a release being tagged. Workflows are commonly used for running tests against pull requests to ensure they’re ready to be merged, or to run the build tooling for a particular codebase.
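For orientation, a minimal workflow file for the common ‘run tests on every pull request’ case might look like this (the file name and test command here are hypothetical):

```yaml
# .github/workflows/tests.yml
name: Tests

on:
  pull_request:   # fires when a PR is opened or updated

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run the test suite
        run: npm test
```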

However, you’re not limited to these common use cases. There are pre-built GitHub actions to help you with most anything you can think of. In our example, we’ll put a couple of these pre-built actions together to create an automatic and repeatable FTP deployment pipeline.

What are we trying to do?

In our specific example, we had to run a couple build steps to generate the correct built files in our WordPress theme, and then deploy that built theme via FTP to either staging or production, based on whether the changes we had made since the last deploy were ready to be reviewed by our client or ready to be reviewed by the world.

Graphic showing our two workflows

We will differentiate between these two by creating two different workflow files: one for staging and one for production. We will trigger the staging workflow anytime code is pushed to the master branch (or a PR is merged to master) and we will trigger the production workflow by tagging a release. It’s much harder to tag a release accidentally than it is to accidentally push to master, so this makes sure that changes don’t go out to production until we’re absolutely sure they’re ready.

Setting up GitHub Actions

If you haven’t created any Actions in your repository yet, the easiest way to get started is to click on the Actions tab in the repository and you’ll be presented with the GitHub Actions splash screen. GitHub will suggest a couple common actions based on the language it detects your repository uses primarily, but since we’re doing something just a bit out of the box, click on Skip this and set up a workflow yourself.

Once you click on that, you’ll be taken into the main GitHub Actions interface where you can start to edit the YAML file that will make up your first workflow. To keep our workflows separate, we’ll name our first one staging.yml just so we can keep track of where we’re deploying. To make sure we don’t lose our work, go ahead and commit this initial file by clicking on the big green Start commit button in the top right corner.

Now that you’ve got a workflow committed, if you have the repository cloned down locally, you can edit this workflow inside your favorite text editor by navigating to the .github/workflows directory that’s inside the repository root. If you’d rather keep editing inside the GitHub GUI, that’s fine too.

Editing a workflow in Sublime Text

Creating our first Action

Now that we have our staging.yml file set up, let’s clear it out so we can start filling it with our own workflow steps. Keep in mind that because this is YAML, spacing and indentation matter, so if you start getting any weird errors, check to make sure you’re indenting with spaces (YAML doesn’t allow tabs, which settles that flamewar for us) and that everything is indented consistently across the entire file. Let’s break the pieces of this workflow file down step by step.

First we want to give our workflow a name. Anything is fine, but something like Staging Deploy probably makes the most sense. This is important so that if you see that a workflow has failed, you can easily know whether it was a staging deployment or a production deployment. After that, we need to tell GitHub when it should trigger our workflow. In this case, we want our staging deployment to trigger any time new code is pushed to master.

```yaml
name: Staging Deploy

on:
  push:
    branches: [ master ]
```

Side note: Any pull requests merged to master technically count as code getting pushed to master, so this workflow will also trigger whenever a pull request is merged to the master branch.

Next, we need to give our Action some information about the environment we want it to run in and any other default variables we want it to use. We default to using ubuntu-latest for the runtime environment and bash as the default shell. These defaults work well for most workflows and should only really be changed if you know what you’re doing. In addition to those, we will set the working-directory variable so that we don’t have to specify this directory on each step of our workflow.

```yaml
name: Staging Deploy

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./themes/our-awesome-theme
```

Side note: Workflow steps run in the root of the repo by default. We only specify our working directory because the root of our GitHub repo is not where we want our action to run.

Adding some Workflow steps

Now that we’ve got our workflow configured and the workflow environment specified, we can start adding steps. The first step is to check out the actual git repo into the environment we just set up inside the Action. We can use a pre-built Action for this; that’s what actions/checkout@v2 specifies here.

```yaml
name: Staging Deploy

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./themes/our-awesome-theme
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
```

After that, we can continue adding steps. If our steps are just shell commands, they can be relatively simple. They need a name and then the command to be run.

```yaml
name: Staging Deploy

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./themes/our-awesome-theme
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Install Composer Dependencies
        run: composer install --prefer-dist --no-progress --no-suggest
      - name: Install NPM packages
        run: npm install
      - name: Generate bundled theme with all assets
        run: npm run bundle
```

Now that our ready-built theme has been generated, the last step is to actually run the FTP deploy. If you’re not well-versed in running an FTP deploy in an environment like this (and don’t worry, neither am I), this is where we again leverage a pre-built Action to help us. In our case, we used SamKirkland’s FTP-Deploy-Action. To use this Action, we have to specify it in the workflow file and provide it with all the necessary parameters. You can read more about specific parameters that might apply to your use case in the linked documentation, but here’s how it looked for us.

```yaml
name: Staging Deploy

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./themes/our-awesome-theme
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Install Composer Dependencies
        run: composer install --prefer-dist --no-progress --no-suggest
      - name: npm install
        run: npm install
      - name: npm run bundle
        run: npm run bundle
      - name: FTP Deploy to WP Engine (Staging)
        uses: SamKirkland/FTP-Deploy-Action@3.1.1
        with:
          ftp-server: sftp://stagingawesometheme.sftp.wpengine.com:2222/
          ftp-username: staginguser-deploy
          ftp-password: ${{ secrets.STAGING_FTP_PASSWORD }}
          local-dir: ./our-awesome-theme/
```

Looks great, right? You might notice that we don’t include our ftp-password in this workflow file. Because these files are committed directly to the repo and storing credentials or other sensitive information in version control is never a good idea, we have to use another GitHub tool called Secrets.

Using Secrets to keep credentials, well…secret

If you click on the Settings tab on your GitHub repo, you’ll see a few different options in the left hand navigation, one of which is Secrets.

GitHub Secrets Screen

By clicking on Add Secret you can specify a name for the secret (PROD_FTP_PASSWORD or STAGING_FTP_PASSWORD in our screenshot above) and then actually paste in your password, which GitHub will store securely. As you can see from the interface above, after that, the password is not accessible. It can be updated or removed, but not accessed.

Now that your credentials are securely stored, you can reference them in your workflow file using the format ${{ secrets.STAGING_FTP_PASSWORD }}, substituting STAGING_FTP_PASSWORD for whatever you named your secret. If you have even stricter security requirements, your FTP URL and FTP username could be stored in Secrets and referenced in the same way as well.
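For example, assuming you created secrets named STAGING_FTP_SERVER and STAGING_FTP_USERNAME alongside the password (those names are just illustrative), the deploy step could reference all three:

```yaml
      - name: FTP Deploy to WP Engine (Staging)
        uses: SamKirkland/FTP-Deploy-Action@3.1.1
        with:
          # All connection details pulled from Secrets, none committed to the repo
          ftp-server: ${{ secrets.STAGING_FTP_SERVER }}
          ftp-username: ${{ secrets.STAGING_FTP_USERNAME }}
          ftp-password: ${{ secrets.STAGING_FTP_PASSWORD }}
          local-dir: ./our-awesome-theme/
```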

Testing our Staging Deployment Workflow

Now that we have our workflow set up (make sure it’s committed!), it’s time to test. Since we configured this workflow to fire on a push to the master branch, that’s what we need to do to test that everything is working as expected. An easy change to test is to push an HTML comment in a particular place where you know it will get output on the frontend of your staging site.

Once you do that, you can go over to the Actions tab in your repo and see your first Action running. If you click into the Action itself, you will be able to see each of the individual steps running and whether they passed or failed. Once all the steps have passed, you should see the elusive green checkmark.

A completed and passing pipeline

Then it’s time to head over to your staging site and check whether you see the HTML comment you inserted into the source code to test your deployment pipeline.

Congratulations! You’ve just created your first workflow and you’ll never have to hear the question “Is that change deployed to staging?” ever again! Anyone with the proper permissions can go into the Actions tab and see when a particular commit triggered a run of the workflow (staging deploy).

Creating the Production Deployment Action

To create our Production Deployment workflow, copy our staging.yml file into a file in the same directory called production.yml. Our production workflow will be exactly the same as our staging workflow, with three notable exceptions.

First, we want to change the name at the very top of the workflow from Staging Deploy to Production Deploy. This ensures that when we’re looking at runs of the workflow inside the GitHub Actions interface, we’ll be able to tell them apart.

Second, we want to change the conditions on which this workflow triggers. Instead of triggering on a push to the master branch like our staging workflow, we only want to trigger this workflow when someone tags a release. This means we need to modify the top part of our workflow as follows:

```yaml
name: Production Deploy

on:
  release:
    types: [published]
```

Finally, our FTP credentials will be different. If you used Secrets to keep your credentials secure (please tell me you did), you’ll need to add your production credentials to Secrets as well and then update the references within the workflow file itself. This means that our FTP step will look something like this:

```yaml
      - name: FTP Deploy to WP Engine (Production)
        uses: SamKirkland/FTP-Deploy-Action@3.1.1
        with:
          ftp-server: sftp://prod440.sftp.wpengine.com:2222/
          ftp-username: produser-deploy
          ftp-password: ${{ secrets.PROD_FTP_PASSWORD }}
          local-dir: ./our-awesome-theme/
```

Testing our Production Deployment Workflow

Since we still have our test HTML comment committed from testing our Staging Deployment workflow, we can test this relatively easily. Once the new Production Deployment workflow is committed, you can go to the Releases area of your repo and click on Draft a new release. Filling out the necessary fields will tag a new release and once that’s completed you should see that your new workflow has been triggered under the Actions tab of your repo.

Drafting a new release

Once all the steps have passed, you should see the elusive green checkmark. Then it’s time to head over to your production site and check whether you see the HTML comment you inserted into the source code to test your deployment pipeline.

If so, you’ve now got a fully automated, multi-environment deployment pipeline. Your DevOps certification on LinkedIn won’t be far away!

Wrapping Up

Even when you have to deploy over FTP, that doesn’t mean you can’t have automated, repeatable deploys. Setting up a deployment pipeline like this helps you use modern development practices like version control, testing, and other automation while still fitting your workflow into your client’s infrastructure requirements. If you’re looking for even more inspiration, check out the Marketplace, where you can find all the pre-built Actions that people have created.

How can we help?

Looking to automate your developer workflow and make your developers more productive? Reach out to hello@alphaparticle.com or use the form below and let’s talk about how we can help.

Example Files

staging.yml:

```yaml
name: Staging Deploy

on:
  push:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./themes/our-awesome-theme
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Install Composer Dependencies
        run: composer install --prefer-dist --no-progress --no-suggest
      - name: npm install
        run: npm install
      - name: npm run bundle
        run: npm run bundle
      - name: FTP Deploy to WP Engine (Staging)
        uses: SamKirkland/FTP-Deploy-Action@3.1.1
        with:
          ftp-server: sftp://stagingawesometheme.sftp.wpengine.com:2222/
          ftp-username: staginguser-deploy
          ftp-password: ${{ secrets.STAGING_FTP_PASSWORD }}
          local-dir: ./our-awesome-theme/
```

production.yml:

```yaml
name: Production Deploy

on:
  release:
    types: [published]

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
        working-directory: ./themes/our-awesome-theme
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
      - name: Install Composer Dependencies
        run: composer install --prefer-dist --no-progress --no-suggest
      - name: npm install
        run: npm install
      - name: npm run bundle
        run: npm run bundle
      - name: FTP Deploy to WP Engine (Production)
        uses: SamKirkland/FTP-Deploy-Action@3.1.1
        with:
          ftp-server: sftp://prod.sftp.wpengine.com:2222/
          ftp-username: produser-deploy
          ftp-password: ${{ secrets.PROD_FTP_PASSWORD }}
          local-dir: ./our-awesome-theme/
```

Custom Block Icons with ACF Blocks

Creating custom Gutenberg/Block Editor blocks with ACF is a great way to give WordPress users the functionality of custom blocks without having to be familiar with JavaScript and React. If you haven’t tried this workflow before, the experience is largely similar to how ACF was used before the advent of the Block Editor. There is the extra step of registering the block (using PHP, still no JavaScript necessary!) so WordPress knows you’re building a new block. But after that’s done, you add custom fields through the same ACF interface you’re already familiar with, and build the template that controls how the block displays with `get_field` and `the_field` calls, just as you always have. The code to register a block looks something like this:

```php
acf_register_block_type( array(
    'name'            => 'image-with-text',
    'title'           => __( 'Image with Text' ),
    'description'     => __( 'A custom block for Image with Text.' ),
    'render_template' => 'template-parts/blocks/image-with-text.php',
    'category'        => 'ap-blocks',
    'keywords'        => array( 'text', 'image', 'image with' ),
) );
```

And with that, we’ve registered a new “Image with Text” block that will now be available inside the Block Editor. However, for adding that last extra bit of polish, let’s take a look at how we can add a custom icon to our new block.

Custom Icons in ACF Blocks

In our register block code, we can specify an icon parameter, which tells ACF that we want a custom icon. We can either specify a string for a Dashicon (an icon set included with WordPress):

```php
acf_register_block_type( array(
    'name'            => 'image-with-text',
    'title'           => __( 'Image with Text' ),
    'description'     => __( 'A custom block for Image with Text.' ),
    'render_template' => 'template-parts/blocks/image-with-text.php',
    'icon'            => 'book',
    'category'        => 'ap-blocks',
    'keywords'        => array( 'text', 'image', 'image with' ),
) );
```

You can even include a custom SVG if you have one you’d rather use:

```php
acf_register_block_type( array(
    'name'            => 'image-with-text',
    'title'           => __( 'Image with Text' ),
    'description'     => __( 'A custom block for Image with Text.' ),
    'render_template' => 'template-parts/blocks/image-with-text.php',
    'icon'            => '<svg viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path fill="none" d="M0 0h24v24H0V0z" /><path d="M19 13H5v-2h14v2z" /></svg>',
    'category'        => 'ap-blocks',
    'keywords'        => array( 'text', 'image', 'image with' ),
) );
```

However, this method means that you have to paste the hardcoded SVG string into every block you want to register with that icon. If you’re registering many blocks with the same icon, and decide that icon needs to change in the future, you have to replace that hard-coded SVG string on every block. Luckily, there’s a way to make using the same icon for multiple blocks much more efficient.

Use an SVG file instead of a hard-coded string

If there’s an SVG file in your theme that you want to use for your custom blocks, you can use file_get_contents to get the SVG code out of the file and into your register block call without hard-coding the SVG string. Here’s what that looks like:

```php
acf_register_block_type( array(
    'name'            => 'image-with-text',
    'title'           => __( 'Image with Text' ),
    'description'     => __( 'A custom block for Image with Text.' ),
    'render_template' => 'template-parts/blocks/image-with-text.php',
    'icon'            => file_get_contents( get_template_directory() . '/images/svgs/image-with-text.svg' ),
    'category'        => 'ap-blocks',
    'keywords'        => array( 'text', 'image', 'image with' ),
) );
```

You’ll need to update the file path to reflect where the image actually exists in your theme, but if you ever wanted to update the icon, you could do so just by updating that one SVG file and all the blocks that reference that file will now be updated as well.
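If many blocks share icons, you can go one step further and wrap the lookup in a small helper (`ap_get_block_icon` here is a hypothetical name, not an ACF function) so each registration call stays short and each SVG file is only read once per request:

```php
// Hypothetical helper: load a theme SVG once and reuse it across blocks.
function ap_get_block_icon( $filename ) {
    static $cache = [];

    if ( ! isset( $cache[ $filename ] ) ) {
        $cache[ $filename ] = file_get_contents(
            get_template_directory() . '/images/svgs/' . $filename
        );
    }

    return $cache[ $filename ];
}

// Then, in each block registration:
// 'icon' => ap_get_block_icon( 'image-with-text.svg' ),
```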

Heads up!

Loading SVG code like this can potentially be a security concern (similar to why the WP Media Library doesn’t support SVGs by default), so make sure any SVGs you include via this method have been inspected to ensure they’re safe and contain only the code needed to render the image.

That’s it!

You should now be able to use your custom SVGs while keeping your register block calls nice and DRY.

Need more help?

If you’re taking on a new Block Editor project or exploring how to transition your site to the new Block Editor, reach out to hello@alphaparticle.com and let’s see how we can help.

Getting Started with Gutenberg Ramp

You may have heard that WordPress has a brand new block editor that completely transforms the writing and content management experience. However, there are plenty of sites that aren’t ready to move their entire base of content over to the new editor. This could be because not all of their plugins are compatible with the new editor or their editorial team isn’t ready to have a totally new editing experience.

(If this is you, we can help! We’re only an email away at hello@alphaparticle.com.)

Luckily, there is a plugin that can help you move your content gradually into the new editor: Gutenberg Ramp. This plugin hasn’t been written about a ton, so when I mentioned it in a conference talk back in 2019, there were some requests for a more in-depth demo.

If you’re more of a video person, I’ve walked through the same use cases and demos that I’ll detail in this post in a YouTube video:

Downloading and installing the plugin

Gutenberg Ramp can be downloaded and installed just like any other plugin. Either search for it in the Plugins section of wp-admin, or download the ZIP file and place it in the plugins directory of your WordPress installation. Once you’ve chosen one of these methods, go into wp-admin and activate the plugin.

Gutenberg Ramp settings

At this point, you should notice that all your content is now editable using the Classic Editor. You’ll also notice that you have a section labeled “Gutenberg Ramp” under the Settings > Writing menu and that’s where we’re going to start.

Enable for an entire post type

A popular use case for Gutenberg Ramp is to enable the new editor on pages, delivering a rich editing experience for those updating page content while letting everyone else writing posts keep the experience they’re used to. Others have a custom post type that’s published less frequently but could really benefit from blocks, and want to learn the ropes of the new editor using that post type.

As we saw in the screenshot above, the Gutenberg Ramp plugin makes both these workflows very simple. Go into Settings > Writing and check the box for the post type you want to enable the new editor on. Make sure to Save Changes, and that’s it! All posts of any of the post types you checked will now use the new block editor, while leaving the rest of your content to use the Classic Editor experience.

Enable for an entire post type (using code)

Maybe (hopefully!) your codebase is under version control and you would like to keep as many of your configuration changes stored in code as possible. Gutenberg Ramp supports this workflow as well. In functions.php (or wherever you keep your modifications and additions to WordPress hooks and filters), you can use the gutenberg_ramp_load_gutenberg function to specify rules for which content is editable with the new editor. For example, if we wanted to edit all content under the Page post type with the new editor, while leaving everything else alone, we could use the code sample below:

gutenberg_ramp_load_gutenberg( [
    'post_types' => [ 'page' ],
] );

This has the same effect as clicking the Page checkbox under Settings > Writing, but allows us to keep this configuration change in code. If you go back to Settings > Writing you will see that the checkbox for the post type that you are controlling through code is now grayed out and cannot be checked.

Disabled Gutenberg Ramp Interface

If you click on the “Why is this disabled?” link, it takes you to documentation that explains:

“If you’re seeing something greyed out, it means the gutenberg_ramp_load_gutenberg() function is already in your theme functions.php. If you want to use the wp-admin UI, remove the conflicting function from your functions.php file.”

This makes sense. Your settings should only be managed in one place to avoid conflicts and if you’re managing them in code, you probably won’t be looking at this admin interface anyway.

Enable for certain posts

If you’re looking to transition to Gutenberg for only certain pieces of content, then the post type option detailed above is still a little too broad for you. Luckily, Gutenberg Ramp does allow you to specify post IDs that should be editable using the new editor while leaving the remainder of your content untouched. The code looks very similar to how we specify post types:

gutenberg_ramp_load_gutenberg( [
    'post_ids' => [ 182, 184, 192 ],
] );

A caveat to watch out for: if you’re using multiple environments (and you should be!) content might not have consistent post IDs across these environments, depending on how they’re set up. For example, if you’re trying to enable the block editor on the About page of your site, it might be post ID 4 in production, but post ID 7 on Staging and maybe 23 on your local. In this case, enabling hard-coded post IDs may have unexpected results.
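One way to sidestep this caveat is to resolve the ID from the page’s path at runtime, since slugs tend to stay consistent across environments even when IDs don’t. A sketch, assuming the About page lives at /about:

```php
// Sketch: resolve the post ID from the page path at runtime instead of
// hard-coding it, since slugs are usually consistent across environments.
// get_page_by_path() returns null when no page matches.
$about = get_page_by_path( 'about' );

if ( $about ) {
    gutenberg_ramp_load_gutenberg( [
        'post_ids' => [ $about->ID ],
    ] );
}
```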

Specifying individual post IDs can also be combined with enabling entire post types. For example, if we wanted to use the new block editor on the specific posts we listed above, but ALSO on all pages, you would use the following snippet:

gutenberg_ramp_load_gutenberg( [
    'post_types' => [ 'page' ],
    'post_ids'   => [ 182, 184, 192 ],
] );

As you can see, you can go pretty far with the Gutenberg Ramp plugin, but to get around the About page example we talked about above, you need to go a bit further.

Completely custom criteria with the use_block_editor_for_post filter

If you need to get even more custom with your criteria for enabling the new editor, you can tap into a WordPress filter called use_block_editor_for_post. This filter passes in the current post so it can be evaluated and expects a boolean return value (true or false) as to whether that post should use the block editor. This gives you ultimate flexibility because you can find out anything and everything about the post before you decide whether you should enable the block editor. You can query for post meta, you can look at post dates, and much more. Keep in mind, however, that writing a slow meta query will definitely impact wp-admin performance, so don’t go too crazy.
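As a sketch of what such custom criteria might look like, here’s a hypothetical filter that checks a post meta key (the use_block_editor key is an invented example, not part of WordPress or Gutenberg Ramp):

```php
// Sketch: enable the block editor only for posts that opt in via a
// hypothetical 'use_block_editor' meta key.
function maybe_enable_block_editor( $can_edit, $post ) {
    return (bool) get_post_meta( $post->ID, 'use_block_editor', true );
}
add_filter( 'use_block_editor_for_post', 'maybe_enable_block_editor', 10, 2 );
```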

However, if you looked at your content and decided to pick a point where all your content going forward should use the new block editor, you can do that with this filter. Find the post ID where you want your cut off point to be and use a snippet like:

function enable_gutenberg_for_select_posts( $can_edit, $post ) {
    return $post->ID >= 106;
}
add_filter( 'use_block_editor_for_post', 'enable_gutenberg_for_select_posts', 10, 2 );

Since post IDs increment with each new post, post 106 and anything published after it will now use the new block editor.

You could use this filter to enable the new editor on posts by title (which would clear up the confusion of the About page in our earlier example) as well as any other custom criteria, which I’ll leave as an exercise for the reader. However, let’s look at one final example: enabling the block editor for posts that are using a certain template.

Enabling the block editor for a specific template

Another common pattern when upgrading a site to the new Block Editor is building out a new template specifically for this new content. We can use the same use_block_editor_for_post filter to enable the new editor for any posts that are using that new template.

First, create your new template file in your theme. Make sure it has a template name (usually declared in a PHP comment at the very top of the template file):

<?php
/* Template Name: Gutenberg Template */

Once you have this template set up, you should see it as an option in the template dropdown on the Edit Post screen. You can then select this template for any posts that you want to use the new template. Make sure to click “Update” to save your changes!

Finally, we need to use our filter again to let WordPress know which posts should have the block editor enabled.

function enable_gutenberg_on_gutenberg_template( $can_edit, $post ) {
    return 'template-gutenberg.php' === get_page_template_slug( $post->ID );
}
add_filter( 'use_block_editor_for_post', 'enable_gutenberg_on_gutenberg_template', 10, 2 );

Swap out template-gutenberg.php for whatever the filename of your new template is. You’ll now notice that whenever you select that template from the template dropdown for a given post, that post switches to using the Block Editor! Note: you may need to refresh after clicking Update to see your changes take effect.

The new Block Editor is here to stay

Whether you’re a fan of the new editor or not, there’s no doubt that it is the way forward for WordPress. If you’re not ready to jump in with both feet, the Gutenberg Ramp plugin can help you step in one toe at a time.

If you’re looking for help making this crucial transition into the modern era of WordPress, check out my talk Helping your Team Transition to Gutenberg on WordPress.tv, or send us an email at hello@alphaparticle.com and let’s talk about how we can help.

Helping Your Team Transition to Gutenberg at WordCamp for Publishers 2019

Supporting Time Ranges for Manually Built Nova Value Metrics in Laravel

tl;dr

Manually built metrics in Nova don’t support the time-range dropdown that the Nova helper functions do, and the Nova helper functions don’t handle many-to-many relationships well (at least not the way I needed).

To get around this, you can use two protected functions from the parent Value class: currentRange() and previousRange(). Just don’t forget to pass in the current admin’s timezone!

Just looking for a code snippet? Jump to the end.


Background

Metric cards in Nova can be produced rapidly, are straightforward to plan and explain, and provide high-impact ‘quick wins’ when using Nova to build out a dashboard for a Laravel application.

Class Structure

Value metrics generally consist of four methods: calculate(), ranges(), cacheFor(), and uriKey(). We’re only concerned with the first two.

The ranges method ultimately populates the dropdown on the frontend. It returns an array of time ranges your metric will support, and is pre-populated by Artisan. If you don’t want to support ranges, you can remove this method; it’s actually optional, and you can add/remove ranges from the returned array as you see fit.
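For reference, a ranges() method similar to what Artisan scaffolds might look like the sketch below; the array keys are what come back as $request->range, and the values are the labels shown in the dropdown:

```php
// The keys here are passed back to calculate() as $request->range;
// the values are the labels displayed in the metric's dropdown.
public function ranges()
{
    return [
        30 => '30 Days',
        60 => '60 Days',
        365 => '365 Days',
        'MTD' => 'Month To Date',
        'YTD' => 'Year To Date',
    ];
}
```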

The Calculate Method

Metrics are centered around that calculate method, which can frequently be a one-liner. The parent class for ranged metrics includes methods for the most frequent queries you’d want to make (count, sum, max, min, and average), so as long as you’re creating a metric for an Eloquent model — say, users — Nova (and Artisan) will do most of the work for you. The calculate method for a metric measuring how your app is growing might look like this:

use App\User;

public function calculate(NovaRequest $request)
{
    return $this->count( $request, User::class );
}

and would return the number of users your app has acquired (over a given range), along with a percent increase or decrease compared to the previous period. You can also further narrow your query to users matching a particular set of rules. For example, if your users had an account_status, you could alter the return statement like so:

public function calculate(NovaRequest $request)
{
    return $this->count( $request, User::where( 'account_status', 'active' ) );
}

The metric looks pretty good on the frontend out of the box, too:

New User Metric Screenshot

Note the range dropdown in the upper right; this is where the ranges() method comes in — you can choose what options appear in that dropdown by setting them in that method (or, if you’re happy with the default options that are pre-populated, don’t worry about it!). The actual implementation of this feature seems to be Nova magic.

This is great for simple metrics like counting how many active users there are in your application, or counting the number of posts that were published, but what if you want to report on a metric that isn’t covered by Nova helper functions?

Manually Building Results Values

Manually building results is equally straightforward at first glance. Nova metrics support manually built result values, and even previous values for comparison, so long as you calculate them yourself:

public function calculate(NovaRequest $request)
{
    $result = // some query
    $previous = // another query (optional)

    return $this->result($result)->previous($previous);
}

This works well for reporting a value in one time range, but when you build your results manually, you lose the ability to dynamically compare the metric across different time ranges (see the dropdown in the upper right of the screenshot above). What if you need that dropdown?

Problem

We needed to return a count of the records in a pivot table. In this case, we have a badges table and a users table, and we need to report the total number of badges earned by all users over a time range.

Building that result manually is easy enough: We can use Laravel’s database facade to count the records in the user_badges pivot table:

$result = DB::table('user_badges')
    ->count();

We can even compare to a previous value if we calculate it ourselves, but it won’t connect to the ranges() method, so this only works if we hardcode a fixed time range. What about that dropdown?
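For illustration, a hard-coded version might look something like the sketch below, using Laravel’s now() helper and an arbitrary 30-day window:

```php
public function calculate(NovaRequest $request)
{
    // Sketch: a fixed 30-day window with a manually calculated previous
    // period. This works, but the range dropdown can't change it.
    $result = DB::table('user_badges')
        ->where('created_at', '>=', now()->subDays(30))
        ->count();

    $previous = DB::table('user_badges')
        ->whereBetween('created_at', [now()->subDays(60), now()->subDays(30)])
        ->count();

    return $this->result($result)->previous($previous);
}
```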

Total Badges Earned Hardcoded Time Range

I was unable to find anything in the documentation on how to handle ranges if building the result values manually, especially to support the dropdown that seems to be Nova magic for straightforward metrics. Fortunately, we can look at how Nova’s metric helper functions are written for ideas. There is an answer in the source code!

Looking Under the Hood: How Nova implements Metrics classes

The metrics classes we generate with Artisan extend the abstract class Value. This class contains the helper methods you use for simple metrics. There isn’t a whole lot happening in these helpers, however. They’re all one-liners that call a protected method. It’s that protected method, aggregate(), that’s of interest to us:

protected function aggregate($request, $model, $function, $column = null, $dateColumn = null)
{
    $query = $model instanceof Builder ? $model : (new $model)->newQuery();

    $column = $column ?? $query->getModel()->getQualifiedKeyName();

    $timezone = Nova::resolveUserTimezone($request) ?? $request->timezone;

    $previousValue = round(with(clone $query)->whereBetween(
        $dateColumn ?? $query->getModel()->getCreatedAtColumn(),
        $this->previousRange($request->range, $timezone)
    )->{$function}($column), $this->precision);

    return $this->result(
        round(with(clone $query)->whereBetween(
            $dateColumn ?? $query->getModel()->getCreatedAtColumn(),
            $this->currentRange($request->range, $timezone)
        )->{$function}($column), $this->precision)
    )->previous($previousValue);
}

This method may look like a lot, but at a bird’s eye view, it’s doing something that’s already documented, both in this post and in Nova’s official docs – it’s manually building a result and a previous value, and returning them! To do this, it’s using two more protected helper methods: currentRange() and previousRange(). So when we manually build results in our metrics class, we’re effectively reimplementing this method ourselves!

It follows that we can call those same helpers in our class to support time ranges. However, those methods each take two inputs, which we must remember to pass in ourselves: the time range (which conveniently is passed in as part of the request) and the timezone of the current user (just as conveniently calculated in the third line of the aggregate method above).

So, the strategy is to use these two helper functions to help manually build our result.

My Final Calculate Function

public function calculate(NovaRequest $request)
{
    $timezone = Nova::resolveUserTimezone($request) ?? $request->timezone;

    $result = DB::table('user_badges')
        ->whereBetween('created_at', $this->currentRange($request->range, $timezone))
        ->count();

    $previous = DB::table('user_badges')
        ->whereBetween('created_at', $this->previousRange($request->range, $timezone))
        ->count();

    return $this->result($result)->previous($previous);
}

This approach, in combination with the built in ranges() method, successfully counts the number of records in the pivot table that were created within the time range selected on the front end.

Total Badges Earned Complete

Write and publish faster with Block Editor keyboard shortcuts

I’ve been using the Block Editor since it was released in WordPress 5.0, but as I’ve been creating and updating more content, I’ve found one strange user interaction that I didn’t know how to get around. When I move my mouse up to the top right of the screen, the Update/Publish button is very close to the Admin Bar, which has a hover state. For example, if I move my mouse just a couple pixels too far, I get the hover state for the logged in user rather than being able to click the Update button.

Accidentally hovering over the admin bar instead of clicking the Update button

Sometimes, if I click too quickly, the browser even starts to navigate away from the editor with potentially unsaved changes!

As I looked into this, I found an interesting feature of the Block Editor that I didn’t know existed: keyboard shortcuts. Using keyboard shortcuts makes the block editor feel much more like Google Docs or even a desktop application like Microsoft Word.

Block Editor keyboard shortcuts panel

For example, instead of having to take my hands off the keyboard and use the mouse to click the Update button, I can use Cmd + S to save the post that I’m working on, just like I would in any desktop application. This means I can save my work more often without interrupting my flow while I’m writing.

The time savings and keyboard shortcuts don’t stop there, however. If you want to be a real power user, you can use Ctrl + Option + H (on a Mac) or Shift + Alt + H (on Windows) to bring up a panel that will show you all the keyboard shortcuts that the editor supports.

Quicker Block Inserts

One of the most useful keyboard shortcuts, and one that hasn’t gotten much attention, is the block inserter shortcut. From within the editor (but not from within a block) you can press the / key and just start typing the name of the block you’re trying to insert. This allows you to insert a new block and continue creating content without leaving the keyboard.

What’s Next?

There’s been some discussion around allowing users (and potentially custom blocks) to register their own keyboard shortcuts. However, that would come with a huge possibility of different blocks trying to register the same keyboard shortcut along with a host of other potential issues. For the time being, it looks like users will have to stick with the shortcuts supported out of the box.

Lucky for us, the keyboard shortcuts that do ship natively with the Block Editor are quite comprehensive, and powerful. Mastering them will help you complete common actions faster and will allow you to use the new editor to its full potential.

The Alpha Particle Perspective: Alexa Healthcare Skills

Amazon announced last month that select partners in the healthcare industry can now build Alexa skills that transmit protected health information in what it’s calling a new “HIPAA-eligible environment”.

The skills that are launching as part of the new program allow users to:

  • Check and request notifications regarding the status of prescription delivery
  • Find an urgent care center and schedule a same day appointment
  • Update child care teams on progress of post-surgery recovery and receive additional appointment information
  • Ask for their latest blood sugar reading, learn about their personal trends and receive insights about this data
  • Manage health improvement goals

Though some of these skills are potentially complex, many are a great fit for a voice interface.

What makes a good voice interface?

Navigating automated voice menus can be more frustrating than necessary. It always feels as though by the time you’re asked to “Press 9 for…” you’ve already forgotten what you were trying to accomplish in the first place. Voice interfaces are different: they function more like a conversation, with one clear choice or action to take at a time. This is also a departure from web or mobile design, where users weigh multiple options on a screen at the same time.

Voice has some clear advantages, given that it can ask a user one question and provide one answer. It is also more accessible to many users, especially those who are visually impaired. However, for a skill to realize these advantages, some design conventions need to be followed.

A good voice interface has a clear, established flow from beginning to end. If a user gets lost on a web interface, it’s easy to click around and start over. On a voice interface, though, usability depends heavily on a flow that is universally easy to follow.

While a voice interface has its benefits, it’s still quite difficult to capture complex information accurately through it, like usernames, passwords, locations, or menu selections. Many skills compensate for this difficulty by having the user enter the required information online, and then linking the user’s account on their service through the device.

For example, the Lyft skill allows users to book a Lyft from home or work addresses previously saved to the app. The Domino’s skill saves favorite and recent orders, which allows you to quickly re-order the same items without having to enter each one individually. With that in mind, let’s take a look at one skill that meets these criteria.

Swedish Health Connect Skill

The Swedish Health Connect skill lets registered users link their accounts to Alexa and then schedule appointments at their convenience. As they describe it:

Get started with booking same-day or next-day appointments by saying “Alexa, open Swedish Health Connect”. Alexa will suggest the next available appointments at a Swedish Express Care Clinic near your home. Need to be more specific? Just say, “Alexa, ask Swedish Health Connect to schedule an appointment tomorrow at 8 am.” Or open the skill and say “Schedule an appointment today after 6 pm.”

Swedish Health Connect

This provides the user with a clear path through the skill to accomplish their desired functionality. When the skill opens, it prompts the user right away with suggestions for a nearby facility, based on information it already knows about the user’s location. As we noted above, the pathway through the skill is made easier by the fact that the user already entered this information online and doesn’t have to be prompted for it.

The user is also presented with a few different options, all of which are related to the action of scheduling an appointment, with variations on times and certain specificities. The skill has a clear purpose, with clear call to actions throughout, and without trying to do too much.

How can voice interface change your business?

Interested in exploring a voice interface like this for your business? Reach out using the contact form below or email us directly at hello@alphaparticle.com, and let’s discuss how we can help.

WordPress 5.0 and Beyond

The Alpha Particle team just got back from WordCamp US, the annual conference centered around the WordPress project. The topic on everyone’s mind was, not surprisingly, Gutenberg and how the new editor experience that was released in WordPress 5.0 will shape the WordPress ecosystem going forward.

The WordPress Community

The release process for WordPress 5.0 was one of the most controversial in recent memory, with many in the community upset with the lack of transparency from the release leads. Prominent community members took issue with the timing of the release, the lack of clearly stated priorities, and the dichotomy between being told their contributions were important and ultimately seeing them ignored or pushed to a future release. This was definitely a shift from past releases, and the general mood seemed to be a much more contained enthusiasm than I had seen at previous events.

At the heart of WordPress is its community, and alienating that community will limit WordPress in many ways. To that end, the community is stepping up and offering alternative suggestions for how the WordPress project is governed, which will ensure that the community is more involved in the process of moving WordPress forward.

For more information on this, check out The WordPress Governance Project.

Gutenberg

Now that 5.0 has been released, the Gutenberg editor is the default editor in WordPress. If you want your editing experience to stay the same, you can install the Classic Editor plugin, but many people across the ecosystem are now using a brand new editing experience in WordPress.

Any new initiative like this will have bugs, no matter how rigorously it is tested, and Gutenberg is no exception. However, as Matt Mullenweg highlighted in his “State of the Word”, many people are enjoying using Gutenberg.

This is certainly a shift for the community and is being felt in a few different areas:

Developer Experience

Whereas the lingua franca of WordPress has always been PHP, Gutenberg brings JavaScript to the forefront of the developer experience in WordPress. With it come tools like npm and Babel, along with a whole host of other concepts that can be daunting for developers who had a grasp on PHP but have mostly been absent from the rapid development of front-end technologies in recent years.

While Gutenberg still supports metaboxes and the old interfaces, it’s clear that blocks are the new first-class citizens in the WordPress admin screen, and for developers to provide the best possible experiences for their clients, they will need to learn how to develop blocks with JavaScript.

Plugin Authors

Similar to theme developers, authors of the 40,000+ plugins in the WordPress repository will need to evaluate how their plugins work and determine what functionality, if any, needs to be upgraded to use blocks. For this reason, plugin developers were some of the most outspoken about the vague release date of WordPress 5.0.

Some plugins are already Gutenberg-ready and fully support the block concept, but most are catching up to this new release. Plugins that modified the old editor have an even more complicated choice to make, because while they may want to update to be Gutenberg-ready, many of their users will likely be using the Classic Editor plugin, which will need to be supported as well.

Support

Both the WordPress Support Forum staff and theme and plugin authors were busy before the release making plans for how to handle the support load of this new release. When any interface changes as significantly as the editor did in 5.0, there is inevitably an increase in questions and support needs from everyday users.

So far, anecdotal evidence points to this being a non-issue, but as the use of Gutenberg moves from mostly early-adopters and people comfortable with new interfaces to the general WordPress population, this could become an issue.

The Future of WordPress

Now that 5.0 has shipped, Matt Mullenweg used much of his “State of the Word” address to discuss the future of the WordPress project.

Notably, he highlighted that the minimum version of PHP needed to run WordPress will be raised in the near future. This is something developers and platform advocates have been requesting for a long time, and it will help ensure the ecosystem runs on the most secure and performant infrastructure possible.

In addition, he discussed how he sees the concept of blocks taking over the entire WordPress admin experience, including things like menus and widgets.

Looking Forward

It’s an exciting and uncertain time in the WordPress ecosystem, as the project is going through systemic change, both on the governance side as well as the technology underlying the project.

But at its core, WordPress is all about the community. This is a community that Alpha Particle has been a part of since the company’s inception and one we will continue to support. This sense of community was extremely evident at WordCamp’s contributor day, when 100+ people came together to contribute to the WordPress project. This group provided translations, worked on new code audit tools, improved the mobile experience, and much more.

This community is what will truly keep WordPress moving forward and we’re excited to see what new things we can build.

The Alpha Particle Perspective: “Marketing in the Age of Alexa”

In a recent issue of the Harvard Business Review, Niraj Dawar covered the challenges and opportunities that exist as we move into an era of voice-driven technology. Here at Alpha Particle, this is a concept we’ve been thinking and speaking about recently. As such, we wanted to take an opportunity to respond to this article. Feel free to read the full article on HBR first or skip straight to our takeaways below.

Voice is a growing market

Amazon has sold ~25 million smart speaker units and is expected to double that number by 2020. Google Assistant is available on ~400 million devices (Google Home smart speakers as well as select Android phones). These numbers show that even though voice is a relatively new interface, consumers are adopting it at a rapid rate.  We have seen users who struggle to use traditional text or cursor-based navigational interfaces embrace voice, and this leads us to believe this adoption will only continue to grow.

It’s too early to determine which platform (Alexa, Google Assistant, Siri, Cortana) will become dominant, however that will become more clear as the platforms continue to develop. Thankfully, development for most of the platforms can occur out of one codebase with only minimal work required to port between the interfaces. This is why we’re recommending companies make their skills available on all possible platforms until a market leader emerges.

Voice will be a powerful interface for customer acquisition

Brand marketing will shift from marketing directly to the consumer towards marketing to these AI-based platforms (Alexa) and indicating your “ideal customer” through various signals including branding, price, past buyers, and more.  The platform will then help you get in front of these ideal customers, which is mutually beneficial for both the customer and the vendor.  While we are always a bit skeptical of this sort of algorithmic matching (Facebook’s fake news epidemic and Google’s algorithmic woes come to mind), we have seen Amazon and other online retailers already moving in this direction with “More items to consider” and “Shoppers like you also viewed”.

Taking the current trend of web analytics even further, brands will be able to perform “market research” with existing data and intelligence on actual customers rather than relying on focus groups or other conventional methods. This could make bringing new products to market more efficient and more likely to succeed, given that brands will be able to see holes in the market through search data and other measures of consumer intent.

Customer satisfaction will be more crucial to retention

Today, customers don’t often reassess whether a brand or product is still right for them. If it works well enough, then they stick with what they are already using. In the future of AI-powered platforms, the AI can constantly reassess whether another product or brand would be a better fit for the customer and prompt them to switch or even gain their permission to make these sort of substitutions automatically. This radical shift in the relationship between the brand and the consumer means that brands will have to be even more closely aligned with their ideal customer in order to keep their business in the era of AI-driven purchasing decisions.

“Push marketing (getting platforms to carry and promote a product) will become more important, while pull marketing (persuading consumers to seek products) becomes less so.” – Harvard Business Review, 2018

Brands will need to get their products on these AI platforms to get them in front of customers rather than targeting customers directly through more traditional advertising. While this will allow companies to focus their marketing efforts on these platforms, the potential for getting lost in the black box of the algorithms is a real concern. In sharp contrast to the past, rather than simply selling more products and maximizing throughput, the goal for brands now will be maximizing the depth of their relationship with the consumer. This will ensure their products continue to be suggested to these consumers and that new products are put in front of these same consumers in the future.

Ready to get started?

Reach out using the contact form below or email us directly (hello@alphaparticle.com) and let’s talk about how we can help your business develop a voice interface to better serve your customers.

Working as a Freelancer With an In-House Development Team

I’m happy to be able to contribute to other blogs and publications, and this article on Simple Programmer was no exception. In many cases, we come in to work with existing development teams or coach other developers on navigating this process. Looking for some help augmenting your dev team? Don’t hesitate to reach out.  Now, let’s get to the post:


At some point as a freelance developer, you will likely be asked to apply your expertise to a team of full-time employees.

Joining a team like this as a contractor or freelancer can be a great opportunity. You can find new peers in your industry and get a chance to apply your knowledge to a new and challenging problem. If you are a senior developer, this can be a great chance to mentor a junior team.

However, it can also be frustrating. You might struggle to fit in if everyone on the team already knows each other; not truly being a part of the company can create some tension. Navigating these technical and interpersonal minefields successfully can turn a potential disaster into a very rewarding experience.

Note that your experience will largely depend on why your client decided to hire you in the first place. It’s important to know why you were brought on and how you fit into the project.

Why Do Companies with Full-Time Employees Bring on Freelancers?

There are a few reasons companies with an existing full-time staff hire freelancers. They essentially boil down to their current staffing arrangement not being a perfect fit for their existing needs.

Need Experience in a New Stack

For example, if the full-time staff has years of experience in Java, but a new project at the company is being built in PHP, companies will often hire an experienced PHP developer to help the staff get familiar with the new stack.

A freelancer can be a good intermediate step in this process, helping to get the existing team up to speed on the new technology more quickly. With a knowledgeable freelancer at their back, the company won’t be forced to hire all new developers or even commit to hiring any full-time employees until it’s decided that PHP will be a firm direction going forward.

A Project Is Behind

If a project is behind deadline or moving in that direction, companies will sometimes hire freelancers to augment the existing team, thinking that having more developers on the project will make the end result ship faster.

As detailed in The Mythical Man-Month, this scenario rarely works out that simply; nonetheless, you may be in a position where time is the most important requirement. If the project is already behind when you join the team, that’s a very important fact to know when you begin since it can affect some of the engineering decisions you make and even some of the office political pressure you’ll be under.

Trying to Expand the Existing Team

Hiring is hard. Especially with the explosion of people learning to be developers, it can be difficult to distinguish superior developers from mediocre ones.

One of the ways companies deal with this problem is to hire freelancers on a “contract-to-hire” basis. Contract-to-hire arrangements define a period of time where a contractor (or freelancer) will work for a company. When that period ends, the employer can decide if they want to hire the contractor on as a full-time employee.

With a contract in place, the employer’s risk is reduced: by the time they decide whether to hire the contractor full-time, they already know how well the contractor works. Many freelancers prefer this as well, since they don’t have to commit to a job without knowing whether they’ll fit in at the company.

Whatever you decide to do as a freelancer, make sure you know what you’re getting into. When you start as a temp or a contract-to-hire, being aware of your situation and the employer’s needs can be an important step in making the project run smoothly. Each of these arrangements implies a different political landscape for you to navigate as a freelancer.

Dealing With Politics

Many freelancers strike out on their own to avoid the corporate politics that plague many full-time employees. Choosing to work for a company as someone who isn’t as tightly integrated into the team as the full-time members means that it can be difficult to get your ideas accepted or your contributions welcomed if the team feels threatened by your presence.

In addition, it can be difficult to find out where you fit into the team if you were hired by someone higher up in the company. Depending on the team’s size, you may get passed around between different projects with no real direction unless there is someone you can specifically report to.

In any environment—but especially as a freelancer who’s new to the team—soft skills are critical. Things like figuring out how you can best help the direction of the team, who you ask for help, and how you fit into the overall team structure are crucial to ensuring a successful project. Working as part of an existing team means determining your role among them and how you can best help move the project forward.

Are You “The Expert”?

Thinking back to our scenarios above, have you been brought in to lead a team of developers transitioning to a new technology, or are you joining a team where you’re more of a hired gun?

As developers, we all have opinions about how architectural decisions should be made or how projects should be run. If you’re leading the team, you probably have more ability to voice those opinions and have the project move in that direction. However, if you’re just working as another set of hands on an existing team with leadership in place, you might find your suggestions don’t carry much weight, especially when you’re first joining the project.

Either way, it’s important to not make snap judgements about the state of the project that you’ve been brought into. If you’ve been writing code for any stretch of time, you know that real-world constraints often force developers to write code they’re less than proud of.

Jumping into a project and immediately criticizing all the work that’s been done is a surefire way to instantly become the least popular member of the team, and you’ll have a hard time changing that negative first impression.

Who Can You Go to for Help?

As you get introduced to all the new pieces of this project (workflow, code, people, and schedule), it’s important to have someone on the team that you feel comfortable going to for help or to ask any questions about your new working environment. Sometimes, this is the person you directly report to, but often a peer or someone else more familiar with the project can be a better resource.

Whoever this person is, it’s important to either find them yourself or ask the person who hired you to point you to the best resource. Struggling through onboarding and wasting time when someone already familiar with the project could have answered your question isn’t a good way to start off your contract.

Day One on Your New Team

As with starting any new job, there will be plenty of housekeeping tasks to keep you busy on your first day.

Paperwork is a certainty, but you’ll also deal with getting your computer set up with their version control system, VPN, ticketing system, and build tools; you may even get a company email address! This all happens before you write a single line of code.

It can be overwhelming, but hopefully the company will have documentation in place to help you get acquainted with their existing procedures. If not, find a point of contact on the team who can help you with any concerns you have while getting set up.

Once you’re up and running on all the company’s systems, it can be helpful to start looking through the ticketing system. By looking at which types of tickets are coming in more frequently, you can get a sense of the current priorities of the project.

Perhaps more notably, tickets that have been sitting in the queue for a long time—especially ones with lots of back-and-forth discussion—may be areas of the project that are complex or particularly contentious.

Once you’ve surveyed the project and all the open tickets, you might find there’s a ticket with a limited scope that you can use to experiment with the codebase.

Working on a simple ticket and actually digging into the code will give you a sense of how the project is structured and surface any local tooling you forgot to set up. In addition, the questions that this initial investigation raises will help you get up to speed more quickly on the codebase and workflow in general.

If you manage to solve an existing small ticket, that’s great! Check with any documentation you received at the beginning of the project to make sure you’ve followed all requisite development practices. Try to bring in someone else on the team and make sure there aren’t any code standards or anything else that you should have been following for this first contribution. If there is a formal code review process, your code is probably ready for that at this point.
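For most teams, that first contribution moves through a branch-per-ticket Git workflow. Here’s a minimal sketch of what that looks like; the project name, ticket number, branch naming scheme, and file touched are all hypothetical placeholders, so check the team’s own conventions and documentation before following it.

```shell
# Minimal sketch of a branch-per-ticket first contribution.
# All names below (repo, ticket number, branch scheme) are made up.
set -e

# Stand-in for the project repo you'd normally clone from the team's remote.
git init -q demo-project && cd demo-project
git config user.name "New Contractor"
git config user.email "contractor@example.com"
echo "# Demo Project" > README.md
git add README.md && git commit -qm "Initial commit"

# Work on a dedicated branch so your change is easy to review in isolation.
git checkout -q -b fix/ticket-1234-readme-typo
echo "Fixed wording." >> README.md
git add README.md
git commit -qm "Clarify README wording (ticket #1234)"

# On a real project you'd now push and open a pull/merge request:
#   git push -u origin fix/ticket-1234-readme-typo
git log --oneline -1
```

Keeping the change on its own branch, with the ticket number in the branch name and commit message, makes it easy for a reviewer to see exactly what your first contribution touches.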

Going through this process may take you longer than just your first day, but once you’ve gotten even a simple contribution all the way through the process, you’re well on your way to getting familiar with the codebase and the project as a whole.

Getting up to Speed on the Codebase

If you’ve looked through the ticketing system and even made a contribution, you should have a clearer idea as to how at least part of the codebase is structured. Now is the time when you should start figuring out how to make larger contributions. One of the best ways to do this is to work with your new teammates.

Asking questions like “What is the most important thing being worked on right now?” or “Where is a lot of effort being spent without much result?” can give you areas to investigate. These questions are also important to ask your direct supervisor if it’s appropriate. If you’re reporting to a project manager or tech lead, they might have visibility into other areas of the project that your peers don’t.

In addition to asking these questions, pair-programming with a teammate as they work through one of their existing tickets can be a good way to find things to watch out for in the current codebase. This can also illuminate workarounds that existing team members have developed to help them work faster.

If you’re not in a position to eliminate these workarounds with more proper solutions (see “Are You ‘The Expert’?” above), you can at least use them for now to make yourself more efficient until the root cause can be addressed. If you have time as you’re pairing, ask your teammate to take you through the entire process of how the application is loaded, including pulling in dependencies and anything else required to render it.

Asking intelligent questions and getting thorough answers is one of the quickest ways to get immersed in the codebase and start making a meaningful impact. Be sure to have a point of contact who is familiar with the code and the workflow who can help you.

Moving Forward

Working as a freelancer with an in-house development team isn’t always easy, but it can be rewarding. Many freelancers turn these projects into full-time jobs, and even if it doesn’t end up that way, you can take pride in helping a team ship a product with the flexibility of still being open to other opportunities.

However, operating as an outsider alongside a larger team is not without its challenges. If you can bring the soft skills of getting involved with the team and helping them accelerate the project as well as the technical skills to get immersed in the codebase quickly, you can have a large impact on the team as a whole. This means a successful project for you, the company that hired you, and your newfound peers.

It can be a challenge, but getting exposed to a different way of working can make you a better developer and a more well-rounded contributor in the future. Make sure to stay in touch with your former client in case you can help out in the future, and you can turn this win into an even bigger one down the road!


This post originally appeared on SimpleProgrammer.com and all images are courtesy of SimpleProgrammer.com