Freelance Web App with Rails API 5.1 and React Frontend, Part 1: Getting Set Up

It’s time to get started with the Rails API and React front end. In Part 0, I gave some background about the project, what technologies would be used and why. Feel free to check it out if you haven’t already.

Prerequisites

To get started with this project, you will need the following installed on your system. Let’s get downloading!

  • Ruby – I will be using version 2.4.2 for this project. rbenv is a popular way to manage your Ruby versions, but RVM is still an option. I recommend reviewing the two options and deciding for yourself.
  • PostgreSQL – PostgreSQL is a robust, feature-rich database system, and it’s the one I’ll be using.
  • Postman – Postman will make it easier to build the API and test out the API calls.

Get the right version of Rails

For this project, I’ll be using Rails 5.1 (currently the latest is 5.1.4), so if you don’t have it, be sure to install the correct version:

gem install rails -v '~> 5.1'

Set up the API app

Let’s go ahead and generate our new API app:

rails new freelance-api --database=postgresql --api

Not too many changes here, just setting the database to Postgres and using API mode. For testing, this project will stick to the default MiniTest.

Go ahead and look at the directory structure in your text editor or in your terminal with tree. If you’ve worked with Rails for regular web applications, you’ll notice this app is a lot slimmer.

The first changes to make are with the Gemfile and the CORS initializer:

Uncomment the `gem 'rack-cors'` line in the Gemfile and run `bundle install` in your terminal.
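For reference, here’s what the relevant line in the generated Gemfile looks like once uncommented (the surrounding comment ships with Rails):

```ruby
# Use Rack CORS for handling Cross-Origin Resource Sharing (CORS),
# making cross-origin AJAX possible
gem 'rack-cors'
```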

And in the API directory, open `config/initializers/cors.rb`, then uncomment and modify it to read:

Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins '*'

    resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end

This will allow the API to play nicely with the front end app. The origins can be adjusted once you know what domain you’ll use for the front end app and are ready to deploy.
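For example, once the front end’s domain is known, the initializer might be tightened like this (both origins here are hypothetical placeholders):

```ruby
# config/initializers/cors.rb
# Allow only the deployed front end and local development;
# replace these hypothetical origins with your own.
Rails.application.config.middleware.insert_before 0, Rack::Cors do
  allow do
    origins 'https://app.example.com', 'http://localhost:3001'

    resource '*',
      headers: :any,
      methods: [:get, :post, :put, :patch, :delete, :options, :head]
  end
end
```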

Version control and documentation

While this API needs a lot of work before it’s done, it’s a good idea to get in the habit of updating the documentation and keeping track of changes as we go.

You can start by creating a repository on GitHub or another git hosting service. It should be fairly straightforward.

Before adding the files to the repo, it’s a good idea to start on some of the basic files you won’t feel like writing as the project wraps up: the README, LICENSE, and CONTRIBUTING files.

Your README should already exist, but go ahead and modify it to make sense with what you have so far. For example, right now mine looks like:

# Freelance API

Make your freelancing more efficient by managing leads, proposals, project documents, clients and more.

*This is a work in progress.*

## Getting Started

### Prerequisites

#### Ruby ~> 2.4

Download and manage via [rbenv](https://github.com/rbenv/rbenv) or [RVM](https://rvm.io/)

#### Rails ~> 5.1

    gem install rails -v '~> 5.1'

#### PostgreSQL ~> 9.6

Follow the [instructions for downloading PostgreSQL](https://www.postgresql.org/download/) based on your operating system, and be sure to [create a database user with privileges](https://wiki.postgresql.org/wiki/First_steps).

### Installing

Clone the repository:

    git clone https://github.com/chznbaum/freelance-api.git
    cd ./freelance-api

Install the gems:

    bundle install

And set up the database:

    rails db:create
    rails db:migrate

Start the development server:

    rails s

You can test this by making a GET request to `localhost:3000` using Postman or an alternative.

## Tests

### End to End Tests

TBA

### Coding Style Tests

TBA

## Deployment

TBA

## Built With

* [Rails](http://rubyonrails.org/) - Web Framework
* [rbenv](https://github.com/rbenv/rbenv) - Environment Management
* [Bundler](http://bundler.io/) - Dependency Management
* [Heroku](https://www.heroku.com/) - Deployment Platform
* [Travis CI](https://travis-ci.org/) - Continuous Integration

## Contributing

Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on the code of conduct and the process for submitting pull requests.

## Versioning

TBA

## Authors

* **Chazona Baum** - Initial work

## License

This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for more details.

## Acknowledgements

There’s still a long way to go, but already a surprising amount can be included!

Go ahead and create a CONTRIBUTING.md file and a LICENSE.md file in your project root. My CONTRIBUTING file just lists TBA, and I am using the MIT license for my project.

Now that these documents are set up, the files can all be added to the repository you created.

git add .
git commit -m "initial commit"
git remote add origin https://github.com/<YOUR GITHUB USERNAME>/freelance-api.git
git push -u origin master

Wrapping up

You’re almost done with the basic setup! To create and update the database, go ahead and run:

rails db:create
rails db:migrate

It seems like we’ve done a lot without much to show for it, but we’ve set up the environment we’ll need to start giving the API functionality.

At this point, you can test the API out by opening Postman and starting your Rails server in the terminal:

rails s

Once the terminal indicates the server is running, send a GET request to localhost:3000 from the Postman request bar.

Look into the HTML you received, and you’ll see it’s Rails’ “Yay! You’re on Rails!” success page.

With that accomplished, the next step is to actually plan out what the API should do in a little more detail and actually start creating the data models.

Freelance Web App with Rails 5.1 API and React Frontend, Part 0: Why?

In my rush to try to earn something from my code learning so far, I’ve had to shelve some of my plans to dive deeper into Ruby on Rails and to learn front end frameworks like Angular and React. When marketing freelance work to non-coders, the most common request seems to be to make something fast, pretty and cheap, which means focusing on static or WordPress sites.

But I’m finding that trying to reach out to potential clients the traditional way is extremely inefficient, especially when conversion rates as a newbie freelancer are basically nonexistent. And solutions for non-technical folks to manage their clients and improve their sales are expensive when it might take several months to actually earn anything.

So it seems to make sense to go ahead and put together an app that will allow me to keep track of and reach out to leads, as well as managing proposals and project documents, and provide clients with access to materials relevant to them. Once the basics are there, I’ll look at integrating an invoicing API like FreshBooks to get more out of the app.

And since it’s been a while since I’ve blogged about what I’m learning, this seemed like a great opportunity.

Over the next few weeks I’ll be building a freelance dashboard app, using Rails 5.1 for the API backend, and React.js on the front end.

To follow along, it’s a good idea to be comfortable working with your terminal and console, have a basic understanding of how a simple Rails app works, and to have a working knowledge of JavaScript. But I’ll try to keep things simple just in case.

You can either wait for the next post to start building, or feel free to dig a bit deeper into why I went with this stack below:

Why Rails?

I’m setting aside for a moment the fact that I love working with Ruby, since it’s not always going to be the best tool for the job.

For one, Rails is pretty fast and easy to set up compared to other options. Given that I intend to use this toward trying to make some income soon, being able to get something up quickly and iterate improvements is a huge advantage.

If I wanted to build the back end and front end in the same application, Rails makes that easy, too. Starting with 5.1, Rails includes Webpack and makes it easy to create apps that use front end frameworks and libraries like Angular, React, Vue, or Ember rather than jQuery. I’ve played around with that a bit, but I think I’d still rather separate out these two key parts into an API and a front end app.

Rails is also well-known for being a great framework for building RESTful APIs. Rails 5 even provides an API mode that makes getting a JSON API up and running even smoother.

And I have some basic experience building applications with Rails. While I want to learn new things, having some familiarity with the back end will help get this built quicker, especially since I’m picking a front end framework I have no experience with.

If I didn’t go with Rails, I would have considered:

  • Phoenix – Phoenix is built on Elixir, and the framework was recently brought to my attention as a more performant alternative to Rails. I definitely want to play around with the pair some, but possibly on a different project. Plus, the performance issues with Rails are generally more of a problem when it’s a lot bigger than I expect this to get.
  • Node.js – When working with JSON and a JavaScript front end framework, a JavaScript back end would seem like a convenient option. I’ve only played around with Node in small pieces, like building Twitter bots. I want to explore Node more in the future, but Rails’ benefits edge out for this project.
  • Sinatra – Sinatra is another Ruby-based option, one that’s supposed to be great for tinier applications where Rails might just get in the way. I’m expecting this app to be a little larger than one I might use Sinatra for, though I’m eager to try it out on a project one of these days.

Why React?

React has gotten a lot of attention lately between concerns about licensing issues (which it seems were ultimately much ado about nothing) and its parent company Facebook constantly ending up in the headlines.

It seems most companies that have gone with JavaScript front end frameworks have landed on either Angular or React. From a pragmatic perspective, there are more jobs out there for either one than for, say, Vue.

React is also a more mature framework than Vue, while having fewer breaking changes than Angular.

And React also has the benefit of React Native. If I wanted to implement a native mobile app for this web application, which I eventually plan to do, React Native makes it much easier.

If I didn’t go with React, I would have considered:

  • Vue.js – Vue, like Ruby, is a joy to use. I’ve tinkered with it before to build a simple Pokémon-based monster battle game. If I wasn’t going with React, I would have definitely gone with Vue to have the most flexibility with my front end.
  • Angular – The other pragmatic approach, Angular is used by a lot of companies I know. It’s also frequently paired with Rails.

Other Considerations

I considered breaking this project up into microservices to make individual features easier to develop and maintain, and initially started it that way. However, it’s been brought to my attention that microservices should be left to situations where teams of people will be working on different services, so this project will take a monolithic approach.

And since I’ll be attending RubyConf this week, posts about this project may be interrupted by posts about the conference.

This should be a fun project to work on, and I look forward to showing you along!

Roughing It Dev Style: Coding Without a Computer

A terrible, horrible, no good, very bad thing happened to me this month: the Surface Pro that was my exclusive personal and work computer died. Even though I found an inexpensive replacement to order, I couldn’t imagine not coding for about a week. I was totally freaking out.

After a few unsatisfying days of merely reading about web development topics, I set to work on figuring out how I could get some real work done, armed only with my iPhone.

Trial and Error

Not wanting to hunt down a new solution if it was unnecessary, I first turned to tools I’ve used before:

Codepen

Codepen wasn’t terrible in a pinch. However, the layout and on-screen keyboard really wasn’t optimized for coding on a phone. Making minor changes took a lot of time, and it was easy to end up with really buggy code.

Cloud9 IDE

Cloud9 had the same problems but with an already more cluttered interface. It explicitly doesn’t support mobile browsers, and it has been made clear multiple times that making Cloud9 mobile-friendly or developing a native mobile app for it is not anywhere on the roadmap.

CodeAnywhere

When exploring the Cloud9 issue, I saw mention of an alternative cross-platform cloud IDE called CodeAnywhere. However, the iOS app appeared not to have been updated since 2014, and after four unsuccessful attempts to so much as create an account and log in, I figured there had to be something better.

What Finally Worked

After some frustrating experiences, I came across tools and techniques that worked beyond my expectations, allowing me to code productively with just a phone and that can allow you to do it, too.

Editor: Buffer Editor

Buffer Editor was a hidden gem in the App Store. It didn’t have a presence from advertising or word of mouth or even enough reviews to warrant a rating. It would be easy to underestimate Buffer.

When actually downloaded, though, it’s clear that Buffer Editor is a real, powerful editor truly optimized for a mobile experience. Features that won me over included:

  • An extended keyboard that makes typing quick and intentional, even one-handed.
  • A well-designed built-in terminal.
  • Full-screen editing, a necessity on such a small screen.
  • Syntax highlighting and autocomplete for 40+ languages and technologies.
  • The ability to connect via SSH, FTP, SFTP, GitHub, Dropbox and iCloud.
  • Sending files by email.

It also included useful features that I didn’t utilize such as Vim support and support for bluetooth keyboards.

While it isn’t free, the 4.99 USD price I got it for was well worth it, especially compared to the popular Coda at almost 25 USD.

Server: Digital Ocean

Being confined to a phone gave me the push I needed to start building more websites and applications directly on a VPS solution like Digital Ocean rather than starting out solely on a local machine. Using a Virtual Private Server (Digital Ocean calls them droplets) lets you set up a development environment and manage files and servers without needing a local environment, since everything lives on another computer in the cloud. Another benefit of a VPS is that, from the start, a project isn’t tied down to one local machine, even if it hasn’t been checked in to version control yet.

Digital Ocean has proven to be easy to use (and inexpensive), and there is a wealth of documentation and guides for folks new to running servers.

Accessing the droplet remotely was easy in Buffer Editor, simply involving adding a new SSH connection and filling out the relevant settings with information from the droplet.

Browser Access: Local Server Binding

The wonderful and problematic thing about web development is, of course, the primary thing you need is a browser. Using a VPS to develop, you have access to a terminal console, but not a Graphical User Interface or a traditional web browser.

When developing a web app, you often need to be able to start a local server and access localhost in the web browser, since that’s where the development server displays the app. Unfortunately, localhost means “this computer,” so it’s inaccessible from outside the VPS hosting the app files.

The workaround here involved learning more about servers and requests:

When a server starts, it makes what is called a bind request to indicate which IP address it is ready to receive requests on. Local development servers typically bind to 127.0.0.1, the loopback address: requests to 127.0.0.1 never leave the machine making them, which is usually convenient for development.

On a VPS, that’s clearly not an option. You need the server to bind to its publicly accessible IP address instead, so that it’s reachable from a browser outside that computer. This IP address is easily obtained from Digital Ocean, and the workaround involved adding a --binding flag like this, for starting a Rails server:

rails s --binding=XX.XX.XX.XX

Now, instead of typing localhost:3000 into the browser’s address bar to view the app, you would type the publicly accessible IP address, like XX.XX.XX.XX:3000. As long as the local server is running with the binding flag, the app will be accessible remotely from the server’s IP address.
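The loopback behavior itself is easy to see with Ruby’s standard socket library; this small illustration (separate from the Rails workflow) binds one socket to the loopback address and one to all interfaces:

```ruby
require "socket"

# Bind to the loopback address: connections can only come from this machine.
# (Port 0 asks the OS for any free port.)
local_only = TCPServer.new("127.0.0.1", 0)

# Bind to all interfaces: connections can also come from other machines,
# e.g. via the droplet's publicly accessible IP.
any_iface = TCPServer.new("0.0.0.0", 0)

puts local_only.local_address.ip_address  # => "127.0.0.1"
puts any_iface.local_address.ip_address   # => "0.0.0.0"

local_only.close
any_iface.close
```

Binding Rails to the droplet’s public IP with --binding works on the same principle.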

Buffer Editor is especially convenient because it keeps the server running after you back out of the terminal, so you can open it again and continue to develop. This can be confusing at first; to stop the server, lock and unlock your phone.

Get Back to Coding Productivity

From here, you can utilize git like usual, get to work and debug your code wherever you are and regardless of WiFi/Ethernet availability. In fact, the day before my new computer arrived, I was able to make 6 commits and push them to GitHub, all from my iPhone.

While my day to day coding will be on a traditional desktop or laptop when it’s available (since two hands are faster than one), I continue to utilize these tools to work on my projects when I can’t bring my laptop somewhere or when mobile internet is all I’ll have access to, such as riding in a car.

I’m actually thankful for the experience of losing access to the computer, because it forced me to find a solution that now allows me to code virtually anywhere.

Getting Started with Jekyll, Part 1: Meeting Jekyll’s Demands

When this left off in Part 0, you had a basic Nginx server running and set up with remote access and a firewall.

So now that you have this awesome server, you should put some stuff on it.

Precious Rubies

First you’ll need to make sure you have the Ruby programming language installed both on your server and on your local computer. While you can have multiple versions of Ruby installed, the server and local machine need to be using the same version. Getting this wrong will take some time to correct later, since you’ll have to reinstall almost everything, so it’s important to pay attention now.

To install Ruby, you’re going to use the Ruby Version Manager (RVM), which also makes it easy to install additional tools and make updates. To install both RVM and Ruby, follow the instructions on the RVM website; they will look something like the following but will change over time. You’ll want to make sure --ruby is added like this:

$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
$ \curl -sSL https://get.rvm.io | bash -s stable --ruby

Do the same for both your server and local computer. Before moving on, check which version of Ruby you are running by entering the following command for each system:

$ ruby -v

You should have the exact same version listed for both. If you have multiple Rubies installed, say on your local computer, you can specify which version to use with the following, replacing the numbers with the actual version number you want to use:

$ rvm use 2.3.3

Once you have the same versions being used for both, you can start working on dependencies.

Crucial Pieces

From here, you’re going to be installing and configuring packages called gems.

In your terminal for your local computer, enter the following:

$ gem install jekyll capistrano bundler

Those install the Jekyll site generator, the Capistrano deployment tool, and the Bundler gem manager. You may receive an error message stating that some other needed gem is missing. However many times you get this type of error, just go back into your terminal, install the gem it names, then re-attempt installing our three like so:

$ gem install gem_name
$ gem install jekyll capistrano bundler

Now that you have those set up locally, it’s time to see what you’re working with in Jekyll. Go ahead and enter the following:

$ jekyll new blog
$ cd blog
$ jekyll serve

This will generate and serve a basic Jekyll blog locally. Check it out! Go to localhost:4000 in your browser, and you should see Jekyll’s default starter blog.

When you’re done here, the goal is to see that same site when you enter your server’s address. To do that, you’re going to need to configure a few things.

Configuration

Server-Side

In your terminal, log back into your server’s non-root user account. You’re going to need a directory for your site to go in, so go ahead and make one:

$ mkdir www
$ cd www

You can title it whatever you like. In that directory, you’re going to add a couple of additional ones that will be needed for your site:

$ mkdir shared
$ mkdir shared/log

Next, you’ll configure Nginx to know where to look for your site. Open the Nginx configuration file:

$ sudo nano /etc/nginx/nginx.conf

Look for the virtual host section, and make sure the include line for your site is uncommented, replacing your_site.com with the domain your site will use.
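For reference, on a stock Ubuntu install the relevant part of nginx.conf is the Virtual Host Configs section near the bottom of the http block, which looks something like this:

```nginx
http {
    # ...

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
```

If your configuration includes sites by name rather than with a wildcard, the include line would instead read include /etc/nginx/sites-enabled/your_site.com; with your own domain substituted.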

Save the file, and you’ll move on. Now, you need to go into the folder where Nginx looks for site configuration information:

$ cd /etc/nginx/sites-available

There is already a file for the server demo page, but you’re going to use it to create one for your Jekyll site:

$ mv ./default ./your_site.com
$ sudo nano your_site.com

Once in the file, make it look something like the following, replacing user_name with your non-root user’s name and 00.00.00.00 with your server’s IP address or hostname if you’ve already configured it. Uncomment the lines listen 443 ssl; and listen [::]:80; if you have set up SSL certificates for your domain:
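Since no two setups are identical, here is a minimal sketch of what that file might contain, assuming the deploy layout used later in this post (Capistrano deploys to /home/user_name/www, and the Jekyll build lands in current/public):

```nginx
server {
    listen 80;
    # listen 443 ssl;
    # listen [::]:80;

    server_name your_site.com www.your_site.com;

    # Jekyll's static build, as deployed by Capistrano
    root /home/user_name/www/current/public;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```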

I’ve left a lot in here that is not absolutely necessary but is helpful to learn from. Once the file is configured, save it, and you’ll need to use a symlink in your sites-enabled directory so Nginx can find it:

$ sudo ln -s /etc/nginx/sites-available/your_site.com /etc/nginx/sites-enabled/your_site.com

Using a symlink (symbolic link) basically places a file that references back to another file, so the same information can be used in two places (sites-available and sites-enabled) but only needs to be updated in one (sites-available). It’s a great way to save time and reduce errors.

Now you need to make sure your Nginx server can access the files it needs to run, as well as all parent folders involved. To do this, you’re going to make the Nginx user the owner of the directory you’re going to deploy to, and then give it read and write access to all the directories it will need:

$ sudo chown -R www-data:www-data /home/user_name/www/current
$ sudo chmod -R 755 /home

Finally go ahead and check to make sure your setup of Nginx doesn’t create any errors, and then restart the Nginx worker:

$ sudo nginx -t
$ sudo kill -HUP `cat /var/run/nginx.pid`

Now that your server is configured properly, you can move on to setting up your deployment tool, Capistrano.

Capistrano

Back on your local machine, in the directory your blog is going in (say, /user_name/blog), go ahead and edit what’s called a Gemfile, which is just a file listing which Ruby version and which gems (and gem versions) your project needs:

$ nano Gemfile

Make it look something like this, keeping any versions, plugins or themes already listed in the file:

source "https://rubygems.org"
ruby "2.3.3"

# Hello! This is where you manage which Jekyll version is used to run.
# When you want to use a different version, change it below, save the
# file and run `bundle install`. Run Jekyll with `bundle exec`, like so:
#
#   bundle exec jekyll serve
#
# This will help ensure the proper Jekyll version is running.
# Happy Jekylling!
gem "jekyll", "3.3.1"
gem "capistrano"
gem "capistrano-rvm"
gem "capistrano-bundler"
gem "rvm1-capistrano3", require: false

# This is the default theme for new Jekyll sites. You may change this to anything you like.
gem "minima", "~> 2.0"

# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
# uncomment the line below. To upgrade, run `bundle update github-pages`.
# gem "github-pages", group: :jekyll_plugins

# If you have any plugins, put them here!
group :jekyll_plugins do
  gem "jekyll-feed", "~> 0.6"
end

Once that’s in place, in your terminal, run the following to create some files Capistrano will need to deploy the site:

$ cap install

Here, you’ll have a bit more editing to do. In your terminal, type:

$ nano Capfile

Like the Gemfile, the Capfile will need to list some pieces required for Capistrano to work. Make it look kind of like this:
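As a sketch based on the stock Capistrano template, a Capfile matching the gems listed in the Gemfile above would look something like this:

```ruby
# Load DSL and set up stages
require "capistrano/setup"

# Include default deployment tasks
require "capistrano/deploy"

# Use RVM to select the server's Ruby and Bundler to install gems there
require "rvm1/capistrano3"
require "capistrano/bundler"

# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
```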

Making these changes tells Capistrano that you’re using RVM to manage Ruby versions and Bundler to manage gems, as well as which version of Capistrano to expect.

To make sure all the necessary gems are properly set up for use, in your terminal type:

$ bundle update

Then the next file to edit:

$ nano config/deploy/production.rb

Edit it so it looks like this, replacing 00.00.00.00 with your server’s address and user_name with your non-root user:

set :stage, :production

# Extended Server Syntax
# ======================
# This can be used to drop a more detailed server definition into the
# server list. The second argument is a, or duck-types, Hash and is
# used to set extended properties on the server.

server "00.00.00.00", user: "user_name", port: 22, roles: %w{web app}

set :bundle_binstubs, nil

set :bundle_flags, "--deployment --quiet"
set :rvm_type, :user

SSHKit.config.command_map[:rake] = "bundle exec rake"
SSHKit.config.command_map[:rails] = "bundle exec rails"

namespace :deploy do

  desc "Restart application"
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      # execute :touch, release_path.join("tmp/restart.txt")
    end
  end

  after :finishing, "deploy:cleanup"
end

# server-based syntax
# ======================
# Defines a single server with a list of roles and multiple properties.
# You can define all roles on a single server, or split them:

# server "example.com", user: "deploy", roles: %w{app db web}, my_property: :my_value
# server "example.com", user: "deploy", roles: %w{app web}, other_property: :other_value
# server "db.example.com", user: "deploy", roles: %w{db}



# role-based syntax
# ==================

# Defines a role with one or multiple servers. The primary server in each
# group is considered to be the first unless any hosts have the primary
# property set. Specify the username and a domain or IP for the server.
# Don't use `:all`, it's a meta role.

# role :app, %w{deploy@example.com}, my_property: :my_value
# role :web, %w{user1@primary.com user2@additional.com}, other_property: :other_value
# role :db,  %w{deploy@example.com}



# Configuration
# =============
# You can set any configuration variable like in config/deploy.rb
# These variables are then only loaded and set in this stage.
# For available Capistrano configuration variables see the documentation page.
# http://capistranorb.com/documentation/getting-started/configuration/
# Feel free to add new variables to customise your setup.



# Custom SSH Options
# ==================
# You may pass any option but keep in mind that net/ssh understands a
# limited set of options, consult the Net::SSH documentation.
# http://net-ssh.github.io/net-ssh/classes/Net/SSH.html#method-c-start
#
# Global options
# --------------
#  set :ssh_options, {
#    keys: %w(/home/rlisowski/.ssh/id_rsa),
#    forward_agent: false,
#    auth_methods: %w(password)
#  }
#
# The server-based syntax can be used to override options:
# ------------------------------------
# server "example.com",
#   user: "user_name",
#   roles: %w{web app},
#   ssh_options: {
#     user: "user_name", # overrides user setting above
#     keys: %w(/home/user_name/.ssh/id_rsa),
#     forward_agent: false,
#     auth_methods: %w(publickey password)
#     # password: "please use keys"
#   }

And last but not least, you’ll edit:

$ nano config/deploy.rb

Make sure it looks like the following. You’re going to replace user_name with your local username, ruby-2.3.3 with whichever version you are using, the application name of "blog" with your choice if you want to change it, and the address listed next to repo_url to the address of your own repository if you use git (otherwise leave those quotes empty):

# config valid only for current version of Capistrano
lock "3.7.1"

set :rvm1_ruby_version, "ruby-2.3.3"

set :application, "blog"
set :repo_url, "https://github.com/user_name/blog.git"

# Default branch is :master
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp

# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, "/home/user_name/www"

# Default value for :format is :airbrussh.
# set :format, :airbrussh

# You can configure the Airbrussh format using :format_options.
# These are the defaults.
# set :format_options, command_output: true, log_file: "log/capistrano.log", color: :auto, truncate: :auto

# Default value for :pty is false
# set :pty, true

# Default value for :linked_files is []
# append :linked_files, "config/database.yml", "config/secrets.yml"

# Default value for linked_dirs is []
# append :linked_dirs, "log", "tmp/pids", "tmp/cache", "tmp/sockets", "public/system"

# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }

# Set stages
set :stages, ["staging", "production"]
set :default_stage, "production"

# Default value for :log_level is :debug
set :log_level, :debug

# Default value for keep_releases is 5
set :keep_releases, 5

namespace :deploy do

  desc "Restart application"
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      # Your restart mechanism here, for example:
      # execute :touch, release_path.join("tmp/restart.txt")
    end
  end

  before :restart, :build_public do
    on roles(:app) do
      within release_path do
        execute "/home/user_name/.rvm/gems/ruby-2.3.3/wrappers/jekyll", "build --destination public"
      end
    end
  end

  after :publishing, :restart

end

Finally, go ahead and tell Capistrano to deploy your current site by typing:

$ cap production deploy

Open your browser window and type your server’s address. You should see the same default Jekyll site you saw locally.

If you do, great! You’ve set up your Jekyll site to push your local content to your server!

You’re finished for today. Want something to do in the meantime? If you set up your domain name last time, here’s a useful guide by Mitchell Anicas on securing your server with a free Let’s Encrypt SSL certificate.

Questions, comments or concerns? Continue the conversation in the comments!

Getting Started with Jekyll, Part 0: Servers and SSH

If you’re a developer, there’s a good chance you may have heard of static site generators like Jekyll, or lightweight blogging platforms like Ghost. They claim to cut down on a lot of the bloat you find in behemoths like WordPress while simultaneously giving you greater control over your site’s structure and design.

For those very reasons, I have recently migrated my existing WordPress blog over to Jekyll. While WordPress is a massively useful tool for sites that require a lot of features, it’s simply unnecessary for a simple blog.

For those who have never managed their blog in this way, though, it can be pretty intimidating. Thankfully, there are a couple of ways to go about installing Jekyll, and both are manageable.

The easiest way to install Jekyll is to use GitHub Pages. I won’t go over that method here; instead, to have more control over the installation and the site itself, we’re going to walk through hosting Jekyll on your own server (an always-connected computer for serving content) or on a Virtual Private Server (VPS), a partitioned piece of a server that acts like its own machine. If you have never managed your own server before, this is a great project to learn from!

By the end of Part 1, you’re going to have a functioning server you can add a domain name to. In Part 2, we’ll get to the dependencies needed to make Jekyll and our deployments work. Ready for some fun?

Requirements

To follow along with this tutorial, you will need a server or VPS with root access. Basically this means you need administrator access to view and change things like system files on your server. Not sure who to use? My recommendation is Digital Ocean; for $5 a month, you will have more server than you need, full root access, as well as great documentation and support. This tutorial assumes you are able to use a command-line terminal, text editor like Sublime Text or Atom, and a web browser. It also assumes Linux, though folks using other systems are welcome to go through and let me know where you get stuck so we can work through it. You do not need to already know any particular language (though comfort with Ruby and Bash help), so if anything doesn’t make sense, please bring it to my attention.

Set Up Your VPS

To start with, you’re going to need to do some basic setup of your server. In this tutorial, we’re using Ubuntu 16.04, which Digital Ocean and other VPS providers make easy to install when creating your server. Once you receive your root login information (likely by email), log into your VPS using SSH from your terminal. Grab your server’s IP address, a series of four numbers separated by periods (an IPv4 address) that tells your browser where to find your server. For Digital Ocean, this can be found on the Droplets page like this:

You should type something like the following, filling in your server’s IP address in place of the zeros:

$ ssh root@00.00.00.00

You’ll likely see something like the following:

Type yes and enter your root password:

$ root_password

Your terminal should require you to immediately change your password. Enter your original password and then the new one twice. The prompt should then change to reflect that you’re using the droplet, like so: root@droplet_name:~$.

Now, this root user can easily get you in a lot of trouble because it has access to everything on your server and can do anything to it. You don’t want to be the person who accidentally deletes everything on your server by hitting just a few keystrokes.

So what we’re going to do instead is create a new user that can access administrator privileges, but only deliberately so accidents are less likely to be catastrophic. Choose a username, and in your terminal, type:

$ adduser user_name

You will be prompted for a password, and then you’ll see additional information requested:

Enter what you like; it’s okay to leave them blank if you prefer. Then we can get to adding privileges. In your terminal, type:

$ usermod -aG sudo user_name

This will add user_name to the user group sudo which by default has superuser or administrator privileges. To access those privileges, when you’re issuing a command you’ll type sudo in front of it, like this:

$ sudo command

The first time you use this in each session, you will have to enter your user’s account password. The prompt will look like:

[sudo] password for user_name:

Before we log out of the root user and into the one you’ve created, it’s a good idea to connect an SSH key pair to the server. These are a set of codes in which one is stored locally (the private key) and one is stored on the server (the public key). In order to log in with them, the user must have the right private key that matches up with the server’s public key. These codes are much harder to crack than passwords, so they make your server more secure against brute-force attacks.

If you don’t already have a key pair, you’ll need to go ahead and create one. In a separate local terminal window (not connected to your server), type the following:

$ ssh-keygen -t rsa

You will see a couple of prompts, first asking where to save the key and then whether to add a passphrase:

For the first prompt, just hit enter. The second is largely up to you. A passphrase could potentially provide more security, say if your local computer was hacked or stolen, but you would need to enter it every time you use the key pair.

When the keys have been created, the public key will be stored at /home/user_name/.ssh/id_rsa.pub and the private key will be at /home/user_name/.ssh/id_rsa. These can be stored in a password manager vault as a backup. Now that you have them, you’ll need to copy the public key to your server under each of your accounts. While still in your local terminal window, type the following, replacing 00.00.00.00 with your server’s IP address:

$ ssh-copy-id root@00.00.00.00

You should see something like this:

Type yes. Then you’ll see:

Enter the password for user_name. You’ll see:

At this point, in your server terminal window, you can go ahead and log out of the root account and try to log back in:

$ exit
$ ssh root@00.00.00.00

You should not be prompted for a password, though you will for a passphrase if you created one. Go ahead and do these same steps again starting with copying the public key to your non-root user, so that both accounts can be accessed without a password.

Important: Only complete this next step if you were successfully able to log in without a password on both accounts. Otherwise, you can lock yourself out of your own server.

The next thing you can do, which is optional but generally a good idea, is to disable password-logins for your root account. This ensures that someone won’t be able to gain administrator access to your server just because they cracked your password. If you do this step, using your server terminal window, open the config file for SSH:

$ sudo nano /etc/ssh/sshd_config

Look for PermitRootLogin and modify it so it says:

PermitRootLogin without-password

Then to make the changes effective, type this into your terminal to reload SSH:

$ sudo service ssh reload

Now that we have this out of the way, there isn’t much need for you to log in to the root account, so get out of there and log into your non-root account.

Configure Your Server with Nginx

At this point, you should be logged into your server as your non-root user. Now, we’re going to install the Nginx web server, which will be pretty easy. In your terminal, type:

$ sudo apt-get update
$ sudo apt-get install nginx

apt-get is a package manager that will help us install things; as long as the name of the program we need is in its listings, it should be able to install it for us. By updating it first, we’re making sure we have the most up-to-date listings to pull from. The second command handles the actual installation, including any needed dependencies. Note: If you get a dependency error installing anything in this tutorial, you may need to look at the error and install the dependency prior to installing the program you want. Just be prepared.

Now’s a pretty good time to make sure we have a firewall set up, since we’ll need to make sure Nginx can get through it. A firewall keeps track of traffic in and out of your server and makes sure that only traffic that meets certain rules can get through. The tool we’re going to use to manage our firewall is ufw, which comes built-in.

First, go ahead and check the status of your firewall by typing:

$ sudo ufw status verbose

You should get a message stating it is inactive. We’re going to deliberately enable it. First, we need to make sure that traffic can get in from the ports we need. Ports are specific endpoints for traffic in and out of the server; for example, emails are generally handled from a different port than your published content will be.

First, since we are accessing our server remotely through SSH (a secure shell session in our terminal), we need to make sure we can keep accessing it that way:

$ sudo ufw allow ssh

SSH uses port 22 by default, so that command will allow connections on that port. We’ll also make sure that HTTP connections (the kind you usually make in your browser) are allowed. As with SSH, ufw knows which port to listen on for HTTP connections (80), so you can type:

$ sudo ufw allow 'Nginx HTTP'

If you have or get an SSL security certificate for your site, which will connect users over https instead and is generally a best practice these days, you will also need to keep its port open:

$ sudo ufw allow 'Nginx HTTPS'

We’re not going to worry about File Transfer Protocol (FTP) connections, used for transferring files back and forth with your server without encryption (not a great idea anyway), or other ports, because we’re going to deploy our site’s files right over SSH.

Now that we have our needed ports open, go ahead and enable the firewall:

$ sudo ufw enable

If you go back and check the firewall status again, you should get output like this:
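The exact rules listed depend on what you allowed, but assuming only the commands above, it should look roughly like this:

```
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing)

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
Nginx HTTP                 ALLOW IN    Anywhere
```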

Now we should be able to check with our system to see that Nginx is running. Type:

$ systemctl status nginx

You’ll get a detailed output, but the piece you want to look for is Active: active (running) since. Just to be sure, go ahead and see if you get Nginx’s default landing page. In your browser window’s navigation bar, type your server’s IP address (should look like 00.00.00.00) and hit enter. You should see something that looks like this:

If you did, great! Your server is working correctly.

We’re finished for today. In Part 2 we’ll get all the dependencies needed and install Jekyll, and on Monday we’ll get the blog posts and changes deployed! Want something to do in the meantime? If you have Digital Ocean, here’s a useful guide by Etel Sverdlov on setting up your domain/hostname on your server.

Questions, comments or concerns? Continue the conversation in the comments!

Ways to Involve Young Kids When Coding

Motherhood is a nonstop guilt trip, and few things cut as deep as not seeming to spend enough time with your kids. Combine that with the time-consuming nature of coding, debugging, and research, and you have a recipe for a stressed-out mom.

What’s a mom to do? Here are some ways I keep my own kids engaged while I work at home.

Let Them Code, Too

I don’t know about your kids, but my four-year-old absolutely loves puzzles. So when I’m working on code, I break out a tablet for him to play ScratchJr.

Scratch and ScratchJr are free programs created by MIT that allow children and newcomers to code to write logic using puzzle pieces. Kids and adults alike can utilize things like loops, conditional statements, and functions to create a vast variety of programs and games. The individual puzzle pieces only fit where the logic makes sense from a coding standpoint.

Scratch can be played with in a web browser, or you can download the software to use it offline. ScratchJr is designed for tablets and works on iPad, Android, and Kindle Fire. Each one organizes pieces into various color coded categories to make it easier for kids to pick it up quickly. And the best part for a kid learning to code: no semicolons!

Since the Jr version doesn’t require reading comprehension, my son likes it for controlling the avatars and making his own mini games. And he loves feeling like he’s doing his “work” while Mommy’s doing hers.

Usability Testing

Almost everyone who builds products knows that users aren’t always good at figuring out how they’re supposed to use them. Even after having others try out your site or software, it’s almost impossible to predict all the ways your genuine users will do things differently.

One way to at least check that a user’s wrong clicks won’t break your site or program is to simply click around randomly and see what happens. Do you know who’s really good at randomly clicking things?

Kids, of course. Let them tool around with your front end and watch from behind them to see if anything breaks. My sons are always trying to snag my laptop so they can press as many keys as they can, so giving them the opportunity to click and tap away with no consequences is a really special treat.

Engage Little Builders

When kids are going to be underfoot, it helps to get them completely absorbed in what they’re doing so they’re less likely to try to wander off. This makes Legos and other building toys great choices for when Mommy needs to get work done at home. Once they start constructing something, it usually holds my kids’ concentration for an extended period of time.

Plus, not only do they have something to focus on for a while, but as long as they get a chance to show me their finished projects and get my smile of approval, they’re as proud and happy as if I had been beside them the whole time. Keeping a camera (or camera phone) by the computer also helps, because then I can take a picture of whatever they’ve built when they present it to me; kids tend to love when their creations are special enough that Mommy wants to take a picture of them.

Let the Little Artist Out

Doing some whiteboarding or wireframing or need to create some flowcharts to get the hang of your algorithm? Young kids often don’t realize exactly what Mommy’s doing when she starts drawing out what she’s working on, but it can definitely inspire them to draw their own little masterpieces.

When I break out my drawing tools, I also take out some crayons for my sons. They get to draw anything they can think of while Mommy puts some flowcharts together. Most importantly, they feel like they’re doing something just like Mommy, so they don’t feel left out when I’m working.

A Little Quiet Time

Of course, not all coding work is active. A lot of time is spent reading books and documentation and doing research. As soon as I appear not to be doing anything, my kids decide that I need them to spice things up a bit.

How do I keep them still? When I do my reading, I try to sit my boys down to do their reading.

My oldest is starting to read words and my youngest likes to look at pictures, so if I give them a good stack of books based on their interests, I can usually get them to enjoy the quiet time long enough for me to make progress.

None of this is to dismiss the fact that working at home while simultaneously taking care of kids is hard work. These options help to make it possible, but I still stay up several hours after my kids go to bed so that I can concentrate for a bit.

Let me know in the comments if these ideas helped you, what you’ve tried, or even your work-from-home horror stories.

The JavaScript Boilerplate to Use if You’re New to Full Stack

Novice JavaScript developers and those just getting started handling the server side can easily fall into the trap of JavaScript fatigue. With the numerous components that make up a full-stack JavaScript application, aggressive fan-bases, and new tools constantly being hailed as the new “must use” thing, it can be difficult for someone starting out to weed through the hype and figure out exactly what they should learn and use for their first applications. When they come across frameworks and boilerplates that are supposed to make things easier, new developers often find that they need to conquer a mountain of new information before they can even attempt them.

Enter Clementine.js: a relatively recent addition, developed by Blake Johnston in 2015. This boilerplate takes simplicity to the extreme, and is easily one of the lightest of its kind.

It offers three versions to choose from; the default is built with Node.js, MongoDB, and Express. There are no preferred front-end frameworks to commit to, and no extraneous technologies to bulk up the project and get in your way, though you can select an Angular version if you favor the MEAN stack.

Students learning full-stack JavaScript through the free (as in speech and beer) program Free Code Camp, who are used to being told to learn by doing, will be delighted to see that there is a version hand-crafted for them. In it, the standard version is paired with Passport for secure authentication through GitHub and Mongoose for modeling data in MongoDB. Clementine.js and Free Code Camp are great for each other.

Each version lays out clear installation instructions and strong documentation. There are tutorials for incorporating Angular, adding authentication, deploying successfully to Heroku, and, for the absolute beginner, walking through what Clementine.js installs for them. Watch that space in the future, as tutorials on testing in Mocha and incorporating React.js for MERN stack projects are soon to come.

Will this be the framework you use for every project you ever make? Absolutely not; there are good reasons why many of these other boilerplates and projects become so bloated, and not every tool works well for every project. But if you’re a beginner and you’re not utilizing Clementine.js, you are making more work for yourself.

Like what Blake has done with Clementine.js? Consider contributing to the project. As with other open-source projects, it can only survive with the help of meaningful contributions.

What are you building today? Continue the conversation in the comments!

Get Involved! Forming a Local Coding Group for Growth and Community

When learning to code, you’re often your own worst enemy. Speedy learning slows to a crawl and you start to wonder if you were cut out to code at all or if you were really an imposter all along. One good way to avoid this self-defeating spiral is to surround yourself with fellow coders as often as you can manage.

Of course, there’s no limit to the support you can find online, but there may be times when you really need a person there with you sharing their own struggles, helping you on bug hunting excursions, or to contribute to a project with you. If you don’t happen to live right near an existing code group, however, don’t despair. We’re makers; you can make your own!

Now, I’m sure someone out there is probably saying, “Gee, Chazona, I’m sure forming a group is easy in a big city, but there’s no way I can find interested people in my [insert suburb or town here].” And unless that person is in extremely remote conditions, they’re probably selling their area short. My advice is not to assume lack of interest before you actually try it because you’ll probably surprise yourself.

So what I’m going to do is share with you how I went about forming a local group when I moved to a village outside of Richmond, VA, and some of the things I learned along the way. While everything may not apply to every situation or location around the world, it should give you an idea of where to start if you’re feeling a little lost.

A Little Inspiration

I’m not going to claim that I came up with the idea to create a local group all on my own. As a student of Free Code Camp, one thing that was consistently recommended throughout the program was to code socially: seek out local groups, pair program, contribute to group projects. Around the same time, my family was in the process of moving from our Richmond suburb to a slightly more distant village. The nearest local coding groups were more than 30 minutes away and met late in the evening, which for a full-time parent without childcare was just not accessible. What was I going to do?

Well, Free Code Camp also had a listing of existing Free Code Camp unofficial groups and recommended that those who didn’t have a group within 15 – 20 minutes of their home just create their own. The recommended medium was Facebook Groups, largely because it’s free to use and an incredible number of people are already using Facebook for other things. So I went on Facebook, followed the Free Code Camp guide and set one up in a matter of minutes.

A Change of Tactics

Once the Facebook group was up, it stagnated a bit, growing by one or two people at a lethargic pace. As a natural introvert, I was not well-suited to heavily marketing my group, and the tools available through Facebook were not always helpful. While I had only expected a handful of people to eventually show interest in the group, I was starting to wonder if there was no interest in code where I lived.

But I wasn’t ready to give up yet. Instead, I took a look at Meetup, a web app I had considered when I’d first moved to the Richmond area but had never gotten around to joining an area group. It seemed simple enough to set a group up, so I filled out the information up to the last screen, expanding the group to be called Midlothian Code and include coders learning a wider variety of languages. I stopped short of purchasing the organizer subscription, and you’d do well to do the same if you utilize Meetup: within 24 hours of abandoning my cart, I got an email offering 50% off my first payment, which I used to pre-pay for six months for $30.

The thing Meetup did for my local group was to bring in interested people. Three days after I formed the group on Meetup, they announced the group to users who had already indicated interest in things like web development and programming, and Midlothian Code grew to almost 30 members overnight.

Suddenly I was in the position of needing to get meetings and events up and running, and of course, I had no idea what I was doing.

I started offering Coffee and Code meetings, since that was the common start for Free Code Camp-based groups. Ultimately, these ended up initially being half general meetings and half chit-chat on development topics. We wanted to start working on projects right away, and I wanted to be able to offer more interesting meeting content. I set up a set of chat rooms on Gitter and an organization account on GitHub to store code. Meanwhile, Meetup was proving not to be as useful for group communication as it was for advertising. I was not getting prompt notifications of messages and comments, the mailing list was outdated and a challenge to use properly, and it was difficult for members to know when information was posted to the message board. Change was needed again.

A Little More Conversation

As we found Meetup slowing down our group’s growth, I started looking into communication tools. We would ultimately need a website, but I held off on building one myself in case our web developer members wanted to contribute to it from the start. So while I registered a domain name for Midlothian Code on Google Domains, it was mainly so we could add Google’s G Suite tools for a professional email address and a suite of office tools.

Instead of building out our website early on, we went with Mobilize.io, which allows for participating in discussion topics, RSVPing for events, and answering polls right from email. I could filter members by things like languages they were interested in and skills they currently had to ensure what we did better fit them.

Meanwhile, I utilized Tailor Brands to obtain an affordable logo for about $20 and some graphic design materials. Between my own code study, organizing the group, and taking care of my kids, I was having to get a little less DIY with my local group than I would have liked.

I also applied for room reservation privileges at my local library, which made our venue situation more secure. Libraries are often a neglected resource, but you should definitely take a look at what your library network offers if you’re starting a local group as it provides vital free services you’re unlikely to get anywhere else.

In a lot of ways, switching to Mobilize was an improvement. We had enough members learning JavaScript to start having JavaScript specific meetups, and I could gather opinions from a set of our members fairly well.

And yet, now we found our membership split with some still exclusively on Facebook, most exclusively on Meetup, and a few using the Mobilize site. It was difficult to coordinate members, to ensure information was up to date on all channels, and even to have an idea of how many of our more than 70 members were actually active. On top of this, as Midlothian Code was expanding, it was going to rapidly break the limits of Mobilize’s free plan and get too expensive to sustain.

A Home for Midlothian Code

It was becoming increasingly clear that getting some kind of coherent, functional website up for the group was going to need to take precedence over the pride of hand coding the site as a group. We needed one centralized place where the most up-to-date information and copies of any shared resources could be found. As much as I had tried to avoid it up until this point, it was time to look at WordPress.org.

I had already reserved a domain name for Midlothian Code, but Google Domains did not provide hosting, SSL certificates or any of the other services you might look for in a DNS provider. Instead, I sought out an inexpensive hosting provider to transfer the DNS service to. Choosing a web host is a big decision, but because our group is still a bit on the smaller side and not for profit, I sought out the least expensive provider that offered month-to-month payments. Do not go the route I did as I ended up using 1&1 Hosting, which is a terrible provider with basically no customer service. I actually set up hosting with a new company and migrated my site over in the length of time I spent on their customer service phone line waiting for a human being. I can comfortably recommend WP Engine for managed WordPress hosting and Digital Ocean for nearly everything else, otherwise, you’ll need to do some research. Make sure you can find quality documentation for your prospective host and more than one point of contact for service.

From there, I pieced together a site that would benefit Midlothian Code while we grow. I’m currently in the process of ensuring its features meet our needs, and once it is done, we will start scheduling and hosting coding workshops. I can’t wait to see how the group grows from here.

Takeaways

If I had to do things all over again, I would have started with Meetup and a group website from the beginning, and I wouldn’t hesitate to use self-hosted WordPress for the group’s site. I would take the time to find a better host, though.

The attention that Meetup gives groups is valuable, although I would limit our use of it to a length of time that brings in new members since it doesn’t provide a satisfactory return on investment after that.

Building a site, meanwhile, is possible almost for free while the group is still small (I’m only paying for domain, inexpensive hosting, and email). The benefit of having one primary place to direct members and have them congregate can’t be overstated. Once members can be organized, that’s when they can really get to work on coded-from-scratch projects that can be highlighted on our site.

Are you thinking about starting a local coding group where you live? Share your concerns in the comments!

How to Deploy Your Twitter Bot (or Other Worker App) to Heroku

Ok, so we recently walked through getting started building a Node.js Twitter bot and then actually putting together functions to make it work. When we left off you had a really cool Twitter bot that acts automatically. Hopefully you’ve added some functions or features that really make it your own. Problem is, it’s not much fun to have a Twitter bot that only works when your terminal is open.

So now we’re going to take that awesome Twitter bot you made and deploy it to a cloud platform called Heroku. This will enable your bot to always be working in the background, and we’re going to do it for free.

At this point, you should have a Twitter bot that works when you run it locally. If you haven’t already, go ahead and commit and push your most recent changes to your repository.

$ git add .
$ git commit -m "some incredible commit message"
$ git push origin master

From here, we’re going to go ahead and create a new local branch (one that exists just on your machine) and give it a name, shown here as <BRANCH-NAME>. Branching essentially lets us work with one copy of your code without disturbing or changing the main copy. Git allows us to create and switch to the branch with one command in the terminal:

$ git checkout -b <BRANCH-NAME>

Go ahead and open your .gitignore file and remove its reference to config.js. While you don’t want your authentication credentials being easily accessible on GitHub, you will need this file to deploy your app to Heroku.

Now, of course, if you don’t already have a Heroku account, you can set one up for free here.

Once you’re in and viewing your dashboard, create a new app, calling it whatever you like. Or leave it blank, and Heroku will come up with an interesting name for you.

Back to your terminal, you’ll need to download the Heroku Command Line Interface (CLI) if you haven’t already. Note: You will need Ruby installed, as Heroku was originally designed for Ruby apps. There are a number of ways to do this, depending on what version you would like.

Once it’s installed, you will have access to your Heroku apps right from your terminal. Log into your account by entering the following into your terminal and following the prompts:

$ heroku login

Now we’re going to connect your working directory with your Heroku app:

$ heroku git:remote -a <YOUR-APP-NAME>

And finally, we can deploy your application to Heroku. This should look very familiar to those who use Git and GitHub:

$ git add .
$ git commit -am "add project files"
$ git push heroku <BRANCH-NAME>:master

Hmm, something about that was different, though. Remember that we want our master branch on Heroku to include the config.js file, but we don’t want our master branch on GitHub to include it. So what we’re doing in that last command is telling Heroku to treat our local <BRANCH-NAME> branch as its master branch.

Now technically, your Twitter bot has been deployed to Heroku. You’ll notice, however, that if you look at your app dashboard online after a few minutes, it may say your app is sleeping. This is because Heroku assumes that if you build an app in Node.js, it will be a web app with a front-end for users to look at. It then uses that assumption to decide what kind of dynos (Linux containers) it thinks you need.

This can be solved a few ways:

Aside from adjusting it in the online GUI, one method is to create a Procfile (no extension, just that). Inside this file, we’ll give instructions to Heroku about what this app should be doing:
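Assuming your bot’s entry point is bot.js, as in this series, a worker-only Procfile needs just one line:

```
worker: node bot.js
```

The worker process type tells Heroku this app runs in the background and doesn’t serve web traffic, so Heroku won’t expect it to bind to a port.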

While I suggest including this file, it’s also a good idea when working with Heroku to know how to change which dynos are being used on your app, so that you can correct or scale them if needed. This can be done right from your terminal:

$ heroku ps:scale web=0 worker=1

After correcting the dynos or pushing changes, I usually restart the dyno I’m using. Heroku restarts your whole app after these changes, but sometimes working with the individual dyno keeps it from crashing on you:

$ heroku restart worker.1

And from there, your app should work just like it did locally. If you need, you can use terminal commands to check on your app, like this one to check the status of your dynos:

$ heroku ps

Or this one that shows your app console, any errors and shutdowns or restarts:

$ heroku logs

Go ahead and check out your awesome app!

What kind of bot or app did you deploy? Feel free to share in the comments!

How to Build a Twitter Bot in Node.js, Part 1: Functionality

This is the second part of a two-part series on creating a Twitter bot. In the first part, we reviewed setting up Twitter credentials for the bot, ensured we have Node and NPM available, and began working with our directory structure and Twitter API module. In this second part, we’ll go over using the API module to tweet, respond to tweets, follow and unfollow, and learn how to make the bot your own. Let’s get started!

Tweeting

At this point, we are beginning to use the Twit module to create tweets. Reviewing the Twit documentation, we can see that the first example uses a post method to create a new status – this is how we post a tweet.

Let’s take a look at what this might look like as a reusable function within our code:

var Twit = require('twit'); // Include Twit package
var config = require('./config'); // Include API keys
var T = new Twit(config);
var stream = T.stream('user'); // Set up user stream
function tweetIt(txt) { //Function to send tweets
  var tweet = { // Message of tweet to send out
    status: txt
  }
  T.post('statuses/update', tweet, tweeted); // Send the tweet, then run tweeted function
  function tweeted(err, data, response) { // Function to run after sending a tweet
    if (err) { // If error results
      console.log(err); // Print error to the console
    }
  }
}
tweetIt('Hello world!'); // Tweet "Hello world!"

This allows us to pass the content of the tweet as a parameter to the function that posts the update. When the tweet attempt is made, we check for errors and print them to the console if any occur. In our function call, the parameter we pass is "Hello world!"

You can open your terminal and run the bot by typing node bot.js (on some Linux distributions the binary is named nodejs, so nodejs bot.js). If you do, your bot should tweet “Hello world!” Go ahead and check it out.

This is a fairly basic tweet, as it only provides whatever text we select ahead of time, but with this structure, you can create functions to make the value you pass into tweetIt() more dynamic.
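As a quick sketch of what a more dynamic approach might look like, you could select the text at random from a list just before calling tweetIt() (the sayings here are placeholders):

```javascript
var sayings = [ // Pool of possible tweet texts (placeholder examples)
  'Hello world!',
  'Beep boop, just bot things.',
  'Tweeting on my own now!'
];
function randomSaying(list) { // Pick a random entry from a list
  return list[Math.floor(Math.random() * list.length)];
}
var txt = randomSaying(sayings); // Text to hand to tweetIt() from the example above
```

Calling tweetIt(txt) then posts whichever saying was chosen, so the bot doesn’t repeat the same text every time.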

In order to be able to have any kind of interactive Twitter bot, though, we need to use some event listeners and respond accordingly. Let’s check out our first one.

Responding to Tweets

The first event we’re going to listen for is the “tweet” event, which will trigger a function to handle any processing that will take place. Our code will look like this:

var Twit = require('twit'); // Include Twit package
var config = require('./config'); // Include API keys
var T = new Twit(config);
var stream = T.stream('user'); // Set up user stream
stream.on('tweet', tweetEvent); // Anytime a tweet enters the stream, trigger tweetEvent
function tweetEvent(eventMsg) { // Function to run on each tweet in the stream
  console.log(eventMsg);
}

Now restart the bot in your terminal (press Ctrl+C to stop the running process before issuing node bot.js again) and then tweet at your bot. Your terminal console will fill with so much information that it can be hard to decipher. It contains things like details of who sent the message, any @mentions, geolocation information, data used to display a profile and more. This information is actually very useful in creating a bot that can respond to tweets, so let’s try to make it look more legible:

var Twit = require('twit'); // Include Twit package
var config = require('./config'); // Include API keys
var T = new Twit(config);
var stream = T.stream('user'); // Set up user stream
stream.on('tweet', tweetEvent); // Anytime a tweet enters the stream, trigger tweetEvent
function tweetEvent(eventMsg) { // Function to run on each tweet in the stream
  var fs = require('fs'); // Include the File System module
  var json = JSON.stringify(eventMsg, null, 4); // Prettify the eventMsg
  fs.writeFile('tweet.json', json, function(err) { // Write the prettified eventMsg to a local file
    if (err) { // If error results
      console.log(err); // Print error to the console
    }
  });
}

Now if you save your file, run the bot again, and tweet at it, a file called tweet.json will be created. In it, you’ll find what looks like a standard JSON file, and it becomes much easier to tell what each piece of data actually is.
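To give a rough idea of its shape, here is a heavily abbreviated sketch of the kind of object you’ll find in tweet.json (the field names come from the Twitter API; nearly all fields are omitted and the values here are made up):

```javascript
var eventMsg = { // Abbreviated sketch; a real payload has many more fields
  text: '@the_mother_bot hello there!', // Message of the tweet
  in_reply_to_screen_name: 'the_mother_bot', // null when the tweet is not an @reply
  user: {
    screen_name: 'some_follower', // Handle of whoever sent the tweet
    name: 'Some Follower'
  }
};
console.log(eventMsg.user.screen_name); // Access the sender's handle
```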

We’re going to need the user.screen_name, text, and in_reply_to_screen_name properties in particular:

var Twit = require('twit'); // Include Twit package
var config = require('./config'); // Include authentication credentials
var bot_screen_name = 'YOUR-BOT-NAME'; // Set your bot's Twitter handle
var T = new Twit(config);
var stream = T.stream('user'); // Set up user stream
stream.on('tweet', tweetEvent); // Anytime a tweet enters the stream, trigger tweetEvent
function tweetEvent(eventMsg) { // Function to run on each tweet in the stream
  var from = eventMsg.user.screen_name; // Who sent the tweet
  var text = eventMsg.text; // Message of the tweet
  var reply_to = eventMsg.in_reply_to_screen_name; // Who tweet was @reply to
  if (from !== bot_screen_name) { // If bot didn't send the tweet
    text = text.replace(/[^a-zA-Z\s]/gi, '').toLowerCase(); // Remove non-letter characters and transform to lowercase
    var tweet_array = text.split(' '); // Create an array of each word in the tweet
    /*
    What to do to each tweet
    */
    if (reply_to !== null && reply_to === bot_screen_name) { // If the tweet was @reply to bot
      /*
      What to do to each @reply
      */
    }
  }
}

Let’s take a look at this for a moment. First is the addition of a new variable bot_screen_name. We’re going to include your bot’s handle in a few conditionals, so we may as well put it in a variable. In the body of the tweetEvent() function, we’re going to set who sent the tweet, what it actually said, and if the tweet was an @reply, who it was in response to. Then we’re going to check that the bot didn’t send the tweet. This seems silly, but bear in mind that your bot’s own tweets, follows and other events are a part of its stream. Inside that condition, we’re also going to check to see if the tweet is an @reply, and if so if it is directed at your bot. With these conditions we can create logic to handle any tweets by those your bot follows as well as handling @replies.

Before I go into an example of how I handled mine, we need to add the tweeting function we used before. Remember that it looked a bit like this:

function tweetIt(txt) { // Function to send tweets
  var tweet = { // Message of tweet to send out
    status: txt
  }
  T.post('statuses/update', tweet, tweeted); // Send the tweet, then run tweeted function
  function tweeted(err, data, response) { // Function to run after sending a tweet
    if (err) { // If error results
      console.log(err); // Print error to the console
    }
  }
}

Now to put it together, this is a simplified version of how I did my Mom Bot’s tweet logic:

var Twit = require('twit'); // Include Twit package
var config = require('./config'); // Include authentication credentials
var bot_screen_name = 'YOUR-BOT-NAME'; // Set your bot's Twitter handle
var bad_words_list = require('badwords/array'); // Include Bad Words package
var unfollow_words_list = [
  'go away',
  /*
  ...
  */
  'leave me alone'
]
for (var k = 0; k < bad_words_list.length; k++) {
  bad_words_list[k] = bad_words_list[k].toLowerCase();
}
var T = new Twit(config);
var stream = T.stream('user'); // Set up user stream
stream.on('tweet', tweetEvent); // Anytime a tweet enters the stream, trigger tweetEvent
function tweetEvent(eventMsg) { // Function to run on each tweet in the stream
  var from = eventMsg.user.screen_name; // Who sent the tweet
  var text = eventMsg.text; // Message of the tweet
  var reply_to = eventMsg.in_reply_to_screen_name; // Who tweet was @reply to
  if (from !== bot_screen_name) { // If bot didn't send the tweet
    text = text.replace(/[^a-zA-Z\s]/gi, '').toLowerCase(); // Remove non-letter characters and transform to lowercase
    var tweet_array = text.split(' '); // Create an array of each word in the tweet
    for (var i = 0; i < tweet_array.length; i++) { // For each word in the tweet
      if (bad_words_list.indexOf(tweet_array[i]) != -1) { // If the word is in the bad words list
        tweetIt('@' + from + ' You tweeted a bad word! Mom Bot\'s not mad, she\'s just disappointed...'); // Bot tweets disappointment, example
      }
    }
    if (reply_to !== null && reply_to === bot_screen_name) { // If the tweet was @reply to bot
      for (var j = 0; j < unfollow_words_list.length; j++) { // For each word in the tweet
        if (text.indexOf(unfollow_words_list[j]) != -1) { // If an unfollow expression is in the tweet
          tweetIt('@' + from + ' Ok, I will leave you alone.'); // Tweet an unfollow response, example
        }
      }
    }
  }
}
function tweetIt(txt) { // Function to send tweets
  var tweet = { // Message of tweet to send out
    status: txt
  }
  T.post('statuses/update', tweet, tweeted); // Send the tweet, then run tweeted function
  function tweeted(err, data, response) { // Function to run after sending a tweet
    if (err) { // If error results
      console.log(err); // Print error to the console
    }
  }
}

Because of Mom Bot’s “personality,” she scans all tweets issued by followers for “bad words” and issues a warning to the user if they occur. For this, I used the Bad Words module to generate an array of words to check for.

In addition, she also scans @replies sent to her to see if they have any phrases set as unfollow-triggers. If she receives one, she will send a response that she will unfollow the user.

Of course, in my actual bot’s code, I have a function randomizing the responses Mom Bot gives so that she’s not constantly repeating herself. If you would like an example of the full code, you can always check out the source here.

Note, she’s not actually unfollowing them here. To do that, we’re going to need to learn just a bit more.

Follow and Unfollow on Command

Now if we check the Twit documentation again, it has a couple of examples of using post methods to create and destroy friendships. Those are the methods we need, as those will enable the bot to follow and unfollow users.

In our case, we’ll go ahead and follow each follower who follows the bot. It will look a bit like this:

var Twit = require('twit'); // Include Twit package
var config = require('./config'); // Include authentication credentials
var bot_screen_name = 'YOUR-BOT-NAME'; // Set your bot's Twitter handle
var bad_words_list = require('badwords/array'); // Include Bad Words package
var unfollow_words_list = [
  'go away',
  /*
  ...
  */
  'leave me alone'
]
for (var k = 0; k < bad_words_list.length; k++) {
  bad_words_list[k] = bad_words_list[k].toLowerCase();
}
var T = new Twit(config);
var stream = T.stream('user'); // Set up user stream
stream.on('tweet', tweetEvent); // Anytime a tweet enters the stream, trigger tweetEvent
stream.on('follow', followed); // Anytime a user follows bot, trigger followed
function tweetEvent(eventMsg) { // Function to run on each tweet in the stream
  var from = eventMsg.user.screen_name; // Who sent the tweet
  var text = eventMsg.text; // Message of the tweet
  var reply_to = eventMsg.in_reply_to_screen_name; // Who tweet was @reply to
  if (from !== bot_screen_name) { // If bot didn't send the tweet
    text = text.replace(/[^a-zA-Z\s]/gi, '').toLowerCase(); // Remove non-letter characters and transform to lowercase
    var tweet_array = text.split(' '); // Create an array of each word in the tweet
    for (var i = 0; i < tweet_array.length; i++) { // For each word in the tweet
      if (bad_words_list.indexOf(tweet_array[i]) != -1) { // If the word is in the bad words list
        tweetIt('@' + from + ' You tweeted a bad word! Mom Bot\'s not mad, she\'s just disappointed...'); // Bot tweets disappointment, example
      }
    }
    if (reply_to !== null && reply_to === bot_screen_name) { // If the tweet was @reply to bot
      for (var j = 0; j < unfollow_words_list.length; j++) { // For each word in the tweet
        if (text.indexOf(unfollow_words_list[j]) != -1) { // If an unfollow expression is in the tweet
          tweetIt('@' + from + ' Ok, I will leave you alone.'); // Tweet an unfollow response, example
        }
      }
    }
  }
}
function followed(eventMsg) { // Function to run on follow event
  var follower_screen_name = eventMsg.source.screen_name; // Follower's screen name
  if (follower_screen_name !== bot_screen_name) { // If follower is not bot
    tweetIt('Thank you for following me!');
    T.post('friendships/create', { screen_name: follower_screen_name }, function(err, data, response) { // Follow the user back
      if (err) { // If error results
        console.log(err); // Print error to the console
      }
    });
  }
}
function tweetIt(txt) { // Function to send tweets
  var tweet = { // Message of tweet to send out
    status: txt
  }
  T.post('statuses/update', tweet, tweeted); // Send the tweet, then run tweeted function
  function tweeted(err, data, response) { // Function to run after sending a tweet
    if (err) { // If error results
      console.log(err); // Print error to the console
    }
  }
}

So here we’ve added an event listener to listen for follow events, which will trigger our followed() function. From there we check that the user doing the follow wasn’t the bot, similar to how we checked tweets. If the event passes our check, the bot will follow the user and print an error if one is thrown.

Let’s use what we’ve learned to actually unfollow the users we tweeted that we would earlier:

var Twit = require('twit'); // Include Twit package
var config = require('./config'); // Include authentication credentials
var bot_screen_name = 'YOUR-BOT-NAME'; // Set your bot's Twitter handle
var bad_words_list = require('badwords/array'); // Include Bad Words package
var unfollow_words_list = [
  'go away',
  /*
  ...
  */
  'leave me alone'
]
for (var k = 0; k < bad_words_list.length; k++) {
  bad_words_list[k] = bad_words_list[k].toLowerCase();
}
var T = new Twit(config);
var stream = T.stream('user'); // Set up user stream
stream.on('tweet', tweetEvent); // Anytime a tweet enters the stream, trigger tweetEvent
stream.on('follow', followed); // Anytime a user follows bot, trigger followed
function tweetEvent(eventMsg) { // Function to run on each tweet in the stream
  var from = eventMsg.user.screen_name; // Who sent the tweet
  var text = eventMsg.text; // Message of the tweet
  var reply_to = eventMsg.in_reply_to_screen_name; // Who tweet was @reply to
  if (from !== bot_screen_name) { // If bot didn't send the tweet
    text = text.replace(/[^a-zA-Z\s]/gi, '').toLowerCase(); // Remove non-letter characters and transform to lowercase
    var tweet_array = text.split(' '); // Create an array of each word in the tweet
    for (var i = 0; i < tweet_array.length; i++) { // For each word in the tweet
      if (bad_words_list.indexOf(tweet_array[i]) != -1) { // If the word is in the bad words list
        tweetIt('@' + from + ' You tweeted a bad word! Mom Bot\'s not mad, she\'s just disappointed...'); // Bot tweets disappointment, example
      }
    }
    if (reply_to !== null && reply_to === bot_screen_name) { // If the tweet was @reply to bot
      for (var j = 0; j < unfollow_words_list.length; j++) { // For each word in the tweet
        if (text.indexOf(unfollow_words_list[j]) != -1) { // If an unfollow expression is in the tweet
          tweetIt('@' + from + ' Ok, I will leave you alone.'); // Tweet an unfollow response, example
          T.post('friendships/destroy', { screen_name: from }, function(err, data, response) { // Unfollow the user
            if (err) { // If error results
              console.log(err); // Print error to the console
            }
          });
        }
      }
    }
  }
}
function followed(eventMsg) { // Function to run on follow event
  var follower_screen_name = eventMsg.source.screen_name; // Follower's screen name
  if (follower_screen_name !== bot_screen_name) { // If follower is not bot
    tweetIt('Thank you for following me!');
    T.post('friendships/create', { screen_name: follower_screen_name }, function(err, data, response) { // Follow the user back
      if (err) { // If error results
        console.log(err); // Print error to the console
      }
    });
  }
}
function tweetIt(txt) { // Function to send tweets
  var tweet = { // Message of tweet to send out
    status: txt
  }
  T.post('statuses/update', tweet, tweeted); // Send the tweet, then run tweeted function
  function tweeted(err, data, response) { // Function to run after sending a tweet
    if (err) { // If error results
      console.log(err); // Print error to the console
    }
  }
}

Now at least the bot is being honest!

Make it Your Own

Now the bot can follow and unfollow independently, it can respond to tweets, and it can send its own tweets fairly easily. But it’s not terribly creative at this point, since it basically follows what my bot does, and even then, it doesn’t do quite as much.

The full example of my bot is below, and as you can see, it includes additional responses and randomizes them. It will respond to @replies with certain “sad” or “proud” words. It also includes some instances where it will notify me if the bot isn’t able to do what’s expected.

console.log('The bot is starting...');
var Twit = require('twit'); // Include Twit Package
var config = require('./config'); // Include authentication credentials
var bot_name = 'Mom Bot';
var bot_screen_name = 'the_mother_bot';
var bot_owner_name = 'otherconsolelog';
var disappointed_bot_list = [ // What Bot will say when user tweets a bad word
  bot_name + "'s not mad; she's just disappointed. 😞 Would you like to delete that tweet?",
  "You're an adult, but do you really want to keep that tweet? 🤔 You can delete it if you don't want a future boss to see.",
  "Do you kiss your Mother Bot with that mouth? 😠 You can always delete that tweet if you think it was foolish.",
  "Sweetie, your tweet had a bad word! 😲 Are you sure that's what you want people to see? You can delete it if not."
];
var thank_you_list = [ // What Bot will say when user follows Bot
  "Thank you for keeping in touch with " + bot_name + ", Sweetie. 😘",
  "Don't tell your brother, but of course you're my favorite, Sweetie. 😍"
];
var unfollow_bot_list = [ // What Bot will say when Bot unfollows user
  "Ok Sweetie, " + bot_name + " will give you space. Follow me again if you want to talk. 😥",
  bot_name + " always loves you, but I\'ll leave you alone. Follow me again if you want to talk. 😥",
  "Ok Sweetie, " + bot_name + " will miss you, but I'm glad you're having a good time. 😥"
];
var feel_better_bot_list = [ // What Bot will say when user tweets about sadness
  "I'm happy simply because you exist. Just thought you should know that. 😘",
  "You are the light of my world and the first thing I think about every day. 😘",
  "Always remember that you are needed and there is work to be done. 😉",
  "Keep in mind that this, too, will pass. In the meantime, I am here for you in whatever way you need. 🤗"
];
var proud_bot_list = [ // What Bot will say when user tweets about something great/to be proud of
  "I'm so proud of you. And even if you weren't so fantastic, I'd still be proud. 😆",
  "I believe in you, Sweetie. 😘",
  "You are one of the best gifts I've ever gotten. I am so proud and humbled. 😊",
  "I feel so proud when I'm with you. 😊",
  "You have some real gifts! 😆",
  "It is so cool to watch you grow up. 😆",
  "You make me so happy just by being you. 😊",
  "I love you so much!😘",
  "You were born to do this! 😆"
];
var unfollow_words_list = [ // Words user can include to request Bot to unfollow the user
  "all your fault",
  "dont care",
  "dont have to",
  "dont need you",
  "go away",
  "hate you",
  "leave me alone",
  "not my mom",
  "not my real mom",
  "run away"
];
var sad_words_list = [ // Words user can include to request cheering up
  "blue",
  "blah",
  "crestfallen",
  "dejected",
  "depressed",
  "desolate",
  "despair",
  "despairing",
  "disconsolate",
  "dismal",
  "doleful",
  "down",
  "forlorn",
  "gloomy",
  "glum",
  "heartbroken",
  "inconsolable",
  "lonely",
  "melancholy",
  "miserable",
  "mournful",
  "sad",
  "sorrow",
  "sorrowful",
  "unhappy",
  "woebegone",
  "wretched"
];
var proud_words_list = [ // Words user can include to express happiness/pride
  "accomplished",
  "accomplishment",
  "amazing",
  "awesome",
  "cheering",
  "content",
  "delighted",
  "glad",
  "glorious",
  "good",
  "grand",
  "gratified",
  "gratifying",
  "happy",
  "heartwarming",
  "inspiring",
  "joyful",
  "magnificent",
  "memorable",
  "notable",
  "overjoyed",
  "pleased",
  "pleasing",
  "proud",
  "resplendent",
  "satisfied",
  "satisfying",
  "splendid",
  "succeeded",
  "success",
  "thrilled"
];
var bad_words_list = require('badwords/array'); // Include Bad Words package
for (var k = 0; k < bad_words_list.length; k++) {
  bad_words_list[k] = bad_words_list[k].toLowerCase(); // Transform Bad Words list to all lowercase
}
var T = new Twit(config);
var stream = T.stream('user'); // Setting up a user stream
stream.on('tweet', tweetEvent); // Anytime a tweet enters the stream, run tweetEvent
stream.on('follow', followed); // Anytime a user follows Bot, run followed
console.log('Entering the stream.');
function tweetEvent(eventMsg) { // Function to run on each tweet in the stream
  var from = eventMsg.user.screen_name; // Who sent the tweet
  var text = eventMsg.text; // Message of the tweet
  var reply_to = eventMsg.in_reply_to_screen_name; // Who tweet was @reply to
  if (from !== bot_screen_name) { // If Bot didn't send the tweet
    console.log('Bot received a tweet.');
    text = text.replace(/[^a-zA-Z\s]/gi, "").toLowerCase(); // Remove non-letter characters and transform to lowercase
    var tweet_array = text.split(' '); // Create an array of each word in the tweet
    for (var i = 0; i < tweet_array.length; i++) { // For each word in the tweet
      if (bad_words_list.indexOf(tweet_array[i]) != -1) { // If the word is included in bad words list
        var disappointed_text = randomSaying(disappointed_bot_list);
        console.log('That tweet had a bad word!');
        tweetIt('@' + from + ' ' + disappointed_text); // Bot tweets her disappointment
      }
    }
    if (reply_to !== null && reply_to === bot_screen_name) { // If the tweet was @reply to Bot
      for (var j = 0; j < unfollow_words_list.length; j++) { // For each word in the unfollow list
        if (text.indexOf(unfollow_words_list[j]) != -1) { // If an unfollow word is in the tweet
          var unfollow_text = randomSaying(unfollow_bot_list);
          console.log('Someone wanted to unfollow.');
          tweetIt('@' + from + ' ' + unfollow_text); // Tweet an unfollow response
          T.post('friendships/destroy', { screen_name: from }, function(err, data, response) { // Unfollow the user
            if (err) { // If error results
              console.log(err); // Print error to the console
              tweetIt('@' + from + ' Something\'s wrong with ' + bot_name + '\'s computer. Ask @' + bot_owner_name + ' to help me unfollow you, please.'); // Tweet a request for user to contact Bot Owner
            }
          });
        }
      }
      for (var l = 0; l < tweet_array.length; l++) { // For each word in the tweet
        if (tweet_array[l] === 'stop') { // If 'stop' is in the tweet
          console.log('Someone\'s having a problem.');
          tweetIt('@' + from + ' ' + bot_name + ' seems to be upsetting you. Please ask @' + bot_owner_name + ' for help.'); // Tweet a request for user to contact Bot Owner
        } else if (sad_words_list.indexOf(tweet_array[l]) != -1) { // If a sad word is in the tweet
          var feel_better_text = randomSaying(feel_better_bot_list);
          console.log('Someone needs cheering up.');
          tweetIt('@' + from + ' ' + feel_better_text); // Tweet to cheer the user up
        } else if (proud_words_list.indexOf(tweet_array[l]) != -1) { // If a proud word is in the tweet
          var proud_text = randomSaying(proud_bot_list);
          console.log('Someone did something awesome.');
          tweetIt('@' + from + ' ' + proud_text); // Tweet to be proud of the user
        }
      }
    }
  }
}
function followed(eventMsg) { // Function to run on follow event
  console.log('Someone followed the bot.');
  var name = eventMsg.source.name; // Who followed
  var follower_screen_name = eventMsg.source.screen_name; // Follower's screen name
  if (follower_screen_name !== bot_screen_name) { // If follower is not Bot
    var thank_you = randomSaying(thank_you_list);
    tweetIt('@' + follower_screen_name + ' ' + thank_you); // Tweet a thank you expression
    T.post('friendships/create', { screen_name: follower_screen_name }, function(err, data, response) { // Follow the user back
      if (err) { // If error results
        console.log(err); // Print error to the console
      }
    });
  }
}
function tweetIt(txt) { // Function to send tweets
  var tweet = { // Message of tweet to send out
    status: txt
  }
  T.post('statuses/update', tweet, tweeted); // Send the tweet, then run tweeted function
  function tweeted(err, data, response) { // Function to run after sending a tweet.
    if (err) { // If error results
      console.log(err); // Print error to the console
    }
  }
}
function randomSaying(sayingList) { // Function to randomize the expression to use
  var saying_number = Math.floor(Math.random()*sayingList.length); // Give a random number within number of available responses
  var saying = sayingList[saying_number]; // Grab expression matching that number
  return saying; // Return the expression
}

Go ahead and take some time to think about what kinds of things you would like your bot to do and how you might implement that logic.

Enjoy learning things visually? Daniel Shiffman, creator of Coding Rainbow, has a fantastic YouTube video series you should check out that helped me a great deal with Twitter bots. If you feel extra awesome, you can help support his work on Patreon or buy one of his books, The Nature of Code: Simulating Natural Systems with Processing or Learning Processing, Second Edition: A Beginner’s Guide to Programming Images, Animation, and Interaction.

Let me know what kind of bot you’ve made, share it so I can follow it, or ask questions in the comments!