We’ve been running the Jobs Hub data collection service for close to a year now, and so far, it’s the only software we support that’s not open source. As of today, that is changing.
You can now run your own job board data collector for free by installing our open source job collection project. Here’s a little more about how the project works:
Why?
When I started work on JobApis, my goal was to standardize data from job boards. At some point, it became obvious that I’d need to collect this data from job boards and store it temporarily in a database in order to access it, analyze it, and learn more about the jobs being posted. The JobApis Collector is simply a wrapper around JobsMulti that allows you to dump jobs into Algolia as you collect them.
How it works
Unfortunately, there’s no easy way to get every job from every job board supported by JobsMulti, so this project instead uses a list of search terms to query each board. Each term can include a keyword and location, as well as timestamps for when it was created, collected, and requested for collection. These term objects are stored in Algolia.
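A term object might look something like the sketch below. The exact field names are illustrative, not the project’s actual schema:

```json
{
  "keyword": "php developer",
  "location": "Chicago, IL",
  "created_at": "2018-01-15T00:00:00Z",
  "last_requested_at": "2018-02-01T12:00:00Z",
  "last_collected_at": "2018-02-01T12:05:00Z"
}
```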
Each time the collector runs, it picks the term that was least recently requested for collection and queues up a request to collect it. Each collection run uses JobsMulti to fetch the latest jobs from all of the supported job boards and saves each one to Algolia. The jobs use our standard job object.
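A collected job record might look roughly like this. The field names here are an illustrative sketch, not the exact schema the project stores:

```json
{
  "title": "PHP Developer",
  "company": "Example Co",
  "location": "Chicago, IL",
  "description": "We are looking for a PHP developer to...",
  "url": "https://example.com/jobs/123",
  "datePosted": "2018-02-01",
  "source": "Indeed"
}
```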
The advantage of using Algolia is that the jobs can be accessed very quickly, and you can configure your own indexes to put the data fields you need to search front and center. The downside to Algolia is cost: their minimum paid plan runs around $60 per month.
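Configuring which fields are searchable is done through Algolia index settings. A minimal sketch of such a settings payload might look like this; the attribute names are illustrative, not the project’s actual configuration:

```json
{
  "searchableAttributes": ["title", "description", "company", "location"],
  "attributesForFaceting": ["location", "source"],
  "customRanking": ["desc(datePosted)"]
}
```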
Archiving jobs
Since most users don’t need stale jobs and I wanted to minimize my Algolia costs, I decided to archive jobs older than 30 days by saving them to Amazon S3. These archives are just JSON files, so they can be re-imported into Algolia via the web interface or API at any time. The archival command’s cutoff is configurable to any number of days you’d like.
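The core of that archival step can be sketched as splitting jobs into fresh and stale records by age and serializing the stale ones to JSON. This is a minimal sketch, not the project’s actual command: the function name and `datePosted` field are assumptions, and the Algolia delete and S3 upload calls are omitted since they would use each service’s own client.

```python
# Sketch: partition jobs by age; the stale half is what would be
# serialized and shipped off to S3 as a JSON archive file.
import json
from datetime import datetime, timedelta, timezone

def partition_by_age(jobs, days=30, now=None):
    """Return (fresh, stale) lists based on each job's 'datePosted'."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    fresh, stale = [], []
    for job in jobs:
        posted = datetime.fromisoformat(job["datePosted"])
        (fresh if posted >= cutoff else stale).append(job)
    return fresh, stale

jobs = [
    {"title": "PHP Developer", "datePosted": "2018-01-02T00:00:00+00:00"},
    {"title": "Data Engineer", "datePosted": "2018-02-25T00:00:00+00:00"},
]
now = datetime(2018, 3, 1, tzinfo=timezone.utc)
fresh, stale = partition_by_age(jobs, days=30, now=now)
archive = json.dumps(stale)  # this JSON blob is what would land in S3
```

Keeping the archives as plain JSON is what makes re-importing into Algolia later straightforward.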
Setting up the project
I’ve been running the project in Docker and have documented the setup in the repository on GitHub. I welcome any additional input on documentation or support requests in the GitHub issues, but my time to address them will likely be limited going forward.