Freelancers are a critical component of any fast-paced, lean tech startup.  Yet most startups seem to have no clue how to interview them:  “Does their portfolio look impressive?  Yes.  Wow, cool, they worked with XYZ well-known company!   Spoke with them for a half hour, he/she is pretty impressive.  Let’s get them on board ASAP.”  Basically, the hiring decision is based on trust.  The risk is losing a week or two (and thousands of dollars) if they don’t work out.  Hire fast, fire fast.  This sucks, but most of the time it’s true.

I’m addressing this blog post to you, the aspiring rockstar freelancer.  I’m trying to say that people will barely even interview you properly before they decide to choose you over someone else.  So interview the client *more* than they interview you, and make client references a key part of your sales pitch.  Your references should sell you not only on your core skills but on your soft skills as well, and there’s one key soft skill you will need to be proud of; it’s what separates the great freelancers from the average ones:

Responsiveness.

Over the years, I’ve noticed that the really amazing freelancers had an unbelievably prompt, clear, concise, and professional way of communicating via asynchronous means (email, IM, basically anything but face-to-face conversation).  If they were emailed even a long, daunting task or question, and they couldn’t deliver on it anytime soon, they’d promptly reply with #1, an acknowledgement that they’d read the question, and #2, an estimate of when they could give a proper reply.  If someone sends them a meeting invite, they are quick to respond with a yes or no.  If someone asks the team for a quick opinion on a new product, the freelancer is the first one to respond with a thoughtful answer.

Now, I’m not advocating checking email or IMs every two minutes.  But all too often, the really average freelancers out there take anywhere from 4 to 8 to even 24 hours to respond to the most basic questions.

Why is responsiveness so important?  Because the stakeholder is compensating someone to deliver the highest quality result in the least time possible, in the most efficient way possible, while consistently meeting the deadlines they promised.  Of course this is an exaggeration, but it’s less of one than you may think:  if the stakeholder is paying the big bucks for the best freelancer, you better believe they should demand prompt, responsive communication on top of everything else.

My point is that if you’re an aspiring freelancer, developing your soft skills should be a priority.   Make responsive, prompt communication a key quality of yours.

Nick Manning,

@seenickcode

You’ve Found a Great Portfolio, Now What?

1.  Employ a 5 Minute Screener Call

Don’t waste time having an hour- or two-hour discussion, only to find the person isn’t the right fit because of some otherwise easy-to-spot “no go” factor.  For example, a candidate can talk for hours in an interview, only to finally disclose that their rate is a no-go for your project budget.   You can also quiz the candidate on some basic design questions to make sure the person is legitimate.  If the screener call goes well, then schedule a proper follow-up in-person conversation.

2.  Propose a One Week Trial Task

You’re going to vet this freelancer with a small yet useful task, not a large project right off the bat.  Make sure the requirements for this are clear, detailed, thought out and contribute to the project in some way.  This will allow you to vet the freelancer with as little risk as possible.  The task should take a week or less and should of course be compensated.  This is definitely worth the money.

3.  Are Project Proposals Genuine?

After vetting the candidate over a screener call and an in-person chat, present your trial task and set expectations.  Ask the candidate to write up a quick design proposal and evaluate them on how much care they put into the actual proposal.  A lot of freelancers use canned or formulaic responses, which is a red flag.

4.  Evaluate Soft Skills As Well

Evaluating whether a designer has the design aptitude and sense of style you’re comfortable with is a lot easier when basing it on a clear trial task.  The harder part is getting a sense of how they will communicate throughout a larger project.  For startups, being responsive and available is absolutely critical, since almost everything is time sensitive.

Within the week trial period:

  • Did they respond to simple emails promptly (within 12 – 18 hours)?
  • Did they make incremental progress or did they finish up the entire task at the very last minute?
  • Were you happy with the quality and effort of their responses to questions or requests?
  • Does the candidate enjoy working with you or is communication from them on “auto-pilot”?
  • Were they able to deliver on time?

5.  Set Expectations From Day 1

Quietly observe how well a candidate communicates and works with your team during the initial task.  If they pass the test and have started to work on a longer term project, you can now set some expectations.  For example, if you’ve been burned in the past by unresponsive freelancers, communicate this to the individual and try to get some commitments on availability.  For instance, some people only work on weekends, or just in the middle of the night, or only Monday through Wednesday.  If you need someone available seven days a week, this will become a problem.

As a bonus, try to enforce some process.  Have a status call on Monday and a demo on Friday and keep things consistent.  Agree on this process before the project starts.  The more expectations are communicated (on both sides), the fewer disappointments there will be down the road!

I hope these tips help you re-evaluate how you screen and interview candidates.  Having a clear process for vetting freelancers can save a huge amount of time and money.

Nick,

Twitter:  @seenickcode

Our Startup

Shindig is a mobile app (iOS, Android) that helps you explore new drinks and share them with the world. Take a picture of what you’re drinking, tag it with taste tags, share it, earn rewards and gamification points, follow famous mixologists and drink aficionados and search for the best drinks nearby.

Our team consists of two people, me (sole engineer) and my partner Harry (growth + business side).

We started off using:

  • Ruby on Rails
  • Postgres
  • Hosting costs: ~$100/mo

We then switched to:

  • Ruby on Rails
  • MongoDB
  • Redis
  • ElasticSearch
  • Heroku + a bunch of Heroku addons
  • Hosting costs: ~$350/mo (multiple environments)

We now use:

  • Ruby
  • Rack
  • Neo4j
  • Neo4j + Spatial
  • Go
  • Private VPS
  • Hosting costs: ~$60/mo (multiple environments)

Postgres to MongoDB

The effort it took to switch to MongoDB was well worth it. I liked MongoDB for its flexibility and schemaless design: you could perform reads on large amounts of data very fast, and we could get new features out the door much faster. Those were the reasons we decided to switch. It also had a great Rails wrapper, so I never really needed to learn (nor did I want to spend the time learning) how to write raw Mongo queries.

In summary, Mongo was:

  • flexible
  • easy to integrate with Rails
  • fast reads from single collections

Issues with MongoDB

After a while, we started getting more and more users, who posted more and more types of content: not only drink photos but also hashtags, venues and friend tags. All this content had to be shown in a feed, based on your posts as well as the posts of the people you follow – a typical social app. Feed performance got worse and worse. MongoDB is not meant to be used as a relational database. We knew this when deciding to switch, but we took some steps to mitigate the performance issues:

Solution #1, Denormalize

We addressed the performance issues by dumping all posts into one single, time-ordered Mongo collection. Nothing fancy, but the problem was that our codebase and data model increased in complexity. Which sucks if you’re the only developer on the team. It sucks because you’re effectively managing two schemas: the original schema and another collection with copied, flattened data. It sucks more when you have production issues to troubleshoot while trying to crank out new features.
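
In plain Ruby, with hashes standing in for Mongo documents, the idea looked roughly like this (the collection and field names are illustrative, not our actual schema):

```ruby
# Hash stand-ins for documents living in three typed Mongo collections.
photos   = [{ id: 1, kind: "photo",   created_at: 300 }]
hashtags = [{ id: 2, kind: "hashtag", created_at: 100 }]
venues   = [{ id: 3, kind: "venue",   created_at: 200 }]

# Denormalize: copy everything into one flat, time-ordered "feed_items"
# collection so rendering the feed is a single fast read.
feed_items = (photos + hashtags + venues).sort_by { |doc| -doc[:created_at] }

# The cost: every write now has to update both the source collection
# and this flattened copy, i.e. two schemas to keep in sync.
```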

Solution #2, Denormalize + Cache with Redis

I love Redis. It’s a non-bloated technology (a huge plus) with great documentation (another huge plus for startups) and a great community for support. We started using it for caching user-specific news feeds. It made our feeds load super fast. We then used it in many more areas of the app (gamification/leaderboards, other feeds, etc).
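
The pattern was essentially cache-aside. A minimal Ruby sketch, with a plain Hash standing in for the Redis client and illustrative method names (real code would use redis-rb's get/set/del plus a TTL):

```ruby
# Hash stand-in for Redis; keys are user ids, values are cached feeds.
FEED_CACHE = {}

# Expensive path: rebuild a user's feed from the database.
def build_feed(user_id)
  ["latest posts for #{user_id}"] # placeholder for the real feed query
end

# Read path: serve the cached feed if present, otherwise build and cache it.
def feed_for(user_id)
  FEED_CACHE[user_id] ||= build_feed(user_id)
end

# Write path: a new post must invalidate the cached feed of every follower.
# This invalidation logic is exactly the code that piles up over time.
def invalidate_feed(user_id)
  FEED_CACHE.delete(user_id)
end
```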

Complexity Kills

Our codebase became more and more complex, now with a denormalized database and Redis. See, the downside to Redis is that if you don’t use it in a straightforward manner, then when you do find a bug, reproducing and troubleshooting the issue is time consuming. Also, the more places you use Redis, the more code you have to manage: code that decides when and how to cache data and when to invalidate the cache.

Complexity kills productivity, even if you’re just one developer. For a feature-rich social network, managing all these types of caches is a royal pain in the ass.

Enter Neo4j

I started learning about Neo4j last year and realized that it was a great choice for social networks. So I decided to roll up my sleeves and switch our entire codebase from MongoDB to Neo4j (the process being the topic of another blog post in the near future).

  1. We needed to quickly iterate and produce new features. Neo4j is a schemaless database. So, like Mongo, it was easy to alter and build our database schema. Pure flexibility.
  2. Like other startups, our data was diverse and interconnected. If your startup idea sounds simple now, wait a few months: it will soon have some social networking aspect to it, coupled with 3rd party data you’re going to have to import/integrate. This is what our startup went through. More and more joins, more frequently. In Neo4j, relationships are first-class citizens, so loading something like a news feed was easy and quite fast.
  3. We wanted a simple persistence layer, and our schema got much simpler. Making complex queries became straightforward because our data model was so simple. No denormalization. We removed Redis. We removed ElasticSearch. Our codebase significantly shrank.
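
To make point #2 concrete, here's a toy Ruby sketch of why a graph model fits a social feed: with relationships stored directly, loading a feed is a traversal rather than a pile of joins (the node and relationship names are illustrative, not our actual schema):

```ruby
# Tiny in-memory graph: adjacency lists play the role of Neo4j relationships.
FOLLOWS = { "me" => ["alice", "bob"] }  # (:User)-[:FOLLOWS]->(:User)
POSTED  = {                             # (:User)-[:POSTED]->(:Drink)
  "alice" => [{ drink: "negroni",       at: 2 }],
  "bob"   => [{ drink: "old fashioned", at: 1 }]
}

# Feed = walk FOLLOWS edges, collect POSTED edges, order by time.
# In Cypher this whole method is a single MATCH pattern.
def feed(user)
  FOLLOWS.fetch(user, [])
         .flat_map { |friend| POSTED.fetch(friend, []) }
         .sort_by { |post| -post[:at] }
end
```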

Summary

We now use Neo4j as our single database and it works brilliantly. Hosting is easy, backup/restores are straightforward and our codebase is clean and simple because we have a clearly defined data model without relying on denormalizing or complex caching mechanisms.

Eventually, once our app grows, we will have to use more technologies for search indexing or caching but for a modestly sized user base (we’re no Facebook of course), Neo4j works perfectly well on its own.

I think investing time in learning how to really leverage a graph database is a major asset for any full stack engineer. It’s easy to use for simple projects, and if your project grows, it can cope with complexity and perform well.

(More blog posts to come on how I migrated our codebase as well as how we’re hosting Neo4j.)

I’d like to share with everyone how awesome CoreOS is for leveraging Docker.

We use this setup for our search proxy here at Swig (a community for drink enthusiasts, for iOS and Android).

Neo4j is used as our database and, as stated in my previous blog post, it works brilliantly as a startup-friendly (read: flexible) database.

(What’s a search proxy? It’s our term for a service that runs alongside our main API, tracks drink searches in the background and scrapes all kinds of metadata for the drinks we don’t yet have in our database.)

In this example, I’ll have a Neo4j database instance running in a Docker container. I used to use Ubuntu for running our Docker containers but soon realized CoreOS made everything easier. CoreOS is a lean and mean flavor of Linux that’s especially catered to Docker users, offering easy scalability. It’s fast, and their site has great documentation.

1. Droplet, Anyone?

I’m going to use DigitalOcean’s $5 droplet for this example. Sign up on their site and create a droplet (instance) with CoreOS as the operating system (screenshot below). I recommend setting up an SSH key ahead of time to use for sign-in.

2. Me Like CoreOS

Sign into your droplet with ssh core@<IP of your droplet> (note that I registered my SSH key ahead of time for my DigitalOcean account, so there’s no password).

Apart from its built-in Docker and easy clustering features, CoreOS ships with systemd, which makes working with Docker much easier. More on this later.

3. Installing Neo4j in 70 Seconds with Docker

Because Docker comes pre-installed on CoreOS, we can now download a Neo4j Docker image that I created myself.

docker pull seenickcode/neo4j-community

This “pulls” a prebuilt image of an entirely independently runnable instance of Linux with Java and the latest Neo4j Community Edition installed. If you’re interested, my Docker recipe is here.

While it downloads..

Interested in learning more about Docker? A concise overview is here. Or, if you just want to follow along with this tutorial, feel free to read up on the basic Docker commands ‘images’, ‘ps’, ‘build’ and ‘run’ here. Also, James Turnbull wrote a really solid book on Docker.

Honestly, it took me some time to get my head around Docker. Even after learning it, it takes time to get used to leveraging it in a realistic environment. Yet I think the payoff of simplicity, scalability and reliability it gives is worth the investment.

4. “Your Very Own, Cheese Pizz..”, er, Neo4j Instance

Now run..

docker run -d --name neo4j --privileged -p 7474:7474 -p 1337:1337 -v /home/core:/var/lib/neo4j/data seenickcode/neo4j-community  

This runs the seenickcode/neo4j-community image you downloaded, gives the container the casual name neo4j (--name), exposes some ports to the outside world (-p) and finally maps (-v) the Neo4j data directory to your CoreOS home path (/home/core), so you can back it up on your own or load your own graph.db directory.

Since we exposed Neo4j’s port 7474, we should now be able to use the Neo4j Data Browser at <yourIP>:7474.

Create a node or two with Neo4j: CREATE (n:Thing)

Now since we mapped a Docker volume to /home/core/, we should now see a graph.db directory there, cool.

5. Systemd Makes Docker a Lot Easier to Use

CoreOS’s systemd lets us control which Docker containers start up and in what order, as well as run any other commands we want before or after them. Also, if a container exits with an error, it can automatically restart that container.

In this example, we’ll specify our own systemd service for the Neo4j container we have. Here’s what it looks like:

[Unit]
Description=Neo4j Community  
#After=example-other-service-of-mine.service
Requires=docker.service

[Service]
TimeoutStartSec=0  
ExecStartPre=-/usr/bin/docker kill neo4j  
ExecStartPre=-/usr/bin/docker rm neo4j  
ExecStart=/usr/bin/docker run --name neo4j --privileged -p 7474:7474 -p 1337:1337 -v /home/core:/var/lib/neo4j/data seenickcode/neo4j-community

[Install]
WantedBy=multi-user.target  

This defines a service called neo4j.service; upon starting it, systemd will kill any running neo4j Docker container, remove it, then run it again.

You can create this as /etc/systemd/system/neo4j.service

You’ll then have to enable this using systemctl via sudo systemctl enable neo4j.service

Now ensure Neo4j is stopped (note that rm doesn’t remove the image itself, only the recently run container) via:

docker kill neo4j  
docker rm neo4j  

Then to start our systemd service sudo systemctl start neo4j.service

Just to be sure, run docker ps to ensure Neo4j was actually started.

We can tail our service via journalctl -f -u neo4j.service

6. Final Notes

Definitely check out the technologies Docker and CoreOS offer. I think they represent a sophisticated, forward-thinking approach to server/cluster management, and it makes sense to invest the time in getting familiar with them.

If you don’t know much about Neo4j, you will certainly, at one time or another, want to get familiar with it in the next year or so. It’s a flexible, sophisticated NoSQL database that really excels at persisting and querying modern data. It’s becoming less and less of a niche “database for only graph data” and more of a solid choice for any NoSQL need.

Questions? Comments? Get in touch with me on Twitter via @seenickcode

Check out our new app, Swig, for iOS and Android. It’s a community for drink enthusiasts that runs on Neo4j!
