Blërg

Blërg is a minimalistic tagged text document database engine that also pretends to be a microblogging system. It is designed to efficiently store small (< 64K) pieces of text in a way that they can be quickly retrieved by record number or by querying for tags embedded in the text. Its native interface is HTTP — Blërg comes as either a standalone HTTP server, or a CGI. Blërg is written in pure C.

Installing

Getting the source

There's no stable release, yet, but you can get everything currently running on blerg.dominionofawesome.com by cloning the git repository at http://git.bytex64.net/blerg.git.

Requirements

Blërg has varying requirements depending on how you want to run it — as a standalone HTTP server, or as a CGI. Either way, you will need a C compiler and make.

As a standalone HTTP server, you will also need libmicrohttpd.

Or, as a CGI, you will need a web server capable of running CGI programs.

Configuring

I know I'm gonna get shit for not using an autoconf-based system, but I really didn't want to waste time figuring it out. You should edit libs.mk and put in the paths where you can find headers and libraries for the above requirements.

Also, further apologies to BSD folks — I've probably committed several unconscious Linux-isms. It would not surprise me if the makefile refuses to work with BSD make. If you have patches or suggestions on how to make Blërg more portable, I'd be happy to hear them.

Building

At this point, it should be gravy. Type 'make' and in a few seconds, you should have http_blerg, cgi_blerg, rss, and blergtool. Each of those can be made individually as well, if you, for example, don't want to install the prerequisites for http_blerg or cgi_blerg.

Installing

While it's not required, Blërg will be easier to set up if you configure it to work from the root of your website. For this reason, it's better to use a subdomain (e.g., blerg.yoursite.com is easier than yoursite.com/blerg/). If you do want to put it in a subdirectory, you will have to modify www/js/blerg.js and change baseURL at the top. The CGI version should work fine this way, but the HTTP version will require the request to be rewritten, as it expects to be serving from the root.

For the standalone web server:

Right now, http_blerg doesn't serve any static assets, so you're going to have to put it behind a real webserver like Apache, lighttpd, nginx, or similar. Set the document root to the www directory, then proxy /info, /create, /login, /logout, /get, /tag, and /put to http_blerg.

For the CGI version:

Copy the files in www to the root of your web server. Copy cgi_blerg to blerg.cgi somewhere on your web server. Included in www-configs is a .htaccess file for Apache that will rewrite the URLs. If you need to call cgi_blerg something other than blerg.cgi, the .htaccess file will need to be modified.

The extra RSS CGI

There is an optional RSS CGI (called simply rss) that will serve RSS feeds for users. Install this like the CGI version above (on my server, it's at /rss.cgi).

API

Blërg's API was designed to be as simple as possible. Data sent from the client is POSTed with the application/x-www-form-urlencoded encoding, and a successful response is always JSON. The API endpoints will be described as though the server were serving requests from the root of the website.

API Definitions

On failure, all API calls return either a standard HTTP error response, like 404 Not Found if a record or user doesn't exist, or a 200 response with some JSON indicating failure, which will look like this:

{"status": "failure"}

Blërg doesn't currently explain why there is a failure, and I'm not sure it ever will.

On success, you'll either get some JSON relating to your request (for /get, /tag, or /info), or a JSON object indicating success (for /create, /put, /login, or /logout), which looks like this:

{"status": "success"}

For the CGI backend, you may get a 500 error if something goes wrong. For the HTTP backend, you'll get nothing (since it will have crashed), or maybe a 502 Bad Gateway if you have it behind another web server.

All usernames must be 32 characters or less. Usernames must contain only the ASCII characters 0-9, A-Z, a-z, underscore (_), period (.), hyphen (-), single quote ('), and space ( ). Passwords can be at most 64 bytes, and have no limits on characters (but beware: if you have a null in the middle, it will stop checking there because I use strncmp(3) to compare).

Tags must be 64 characters or less, and can contain only the ASCII characters 0-9, A-Z, a-z, hyphen (-), and underscore (_).
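
In case that's easier to read as code, here's a quick sketch of the tag rule (illustrative only; the function name is made up, and the actual Blërg source may check this differently):

#include <string.h>

/* Sketch: validate a tag name against the rules above.
   64 characters or less; only 0-9, A-Z, a-z, hyphen, underscore. */
static int valid_tag(const char *tag) {
    size_t len = strlen(tag);
    if (len == 0 || len > 64)
        return 0;
    for (size_t i = 0; i < len; i++) {
        char c = tag[i];
        if (!((c >= '0' && c <= '9') ||
              (c >= 'A' && c <= 'Z') ||
              (c >= 'a' && c <= 'z') ||
              c == '-' || c == '_'))
            return 0;
    }
    return 1;
}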

/create - create a new user

To create a user, POST to /create with username and password parameters for the new user. The server will respond with failure if the user exists, or if the user can't be created for some other reason. The server will respond with success if the user is created.

/login - log in

POST to /login with the username and password parameters for an existing user. The server will respond with failure if the user does not exist or if the password is incorrect. On success, the server will respond with success, and will set a cookie named 'auth' that must be sent by the client when accessing restricted API functions (/put and /logout).

/logout - log out

POST to /logout with username, the user to log out, along with the auth cookie in a Cookie header. The server will respond with failure if the user does not exist or if the auth cookie is bad. The server will respond with success after the user is successfully logged out.

/put - add a new record

POST to /put with username and data parameters, and an auth cookie. The server will respond with failure if the auth cookie is bad, if the user doesn't exist, or if data contains more than 65535 bytes after URL decoding. The server will respond with success after the record is successfully added.
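
To make the cookie flow concrete, here's a sketch of a client that logs in and posts a record using libcurl. The endpoints and parameter names are the ones documented above; the host, credentials, and record text are made up for illustration:

#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_ALL);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* An empty CURLOPT_COOKIEFILE turns on libcurl's cookie engine,
       so the auth cookie set by /login is replayed on /put. */
    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");

    /* Log in. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://blerg.example.com/login");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "username=jon&password=hunter2");
    if (curl_easy_perform(curl) != CURLE_OK) return 1;

    /* Post a record. The data parameter must be URL encoded. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://blerg.example.com/put");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                     "username=jon&data=eatin%20a%20taco%20on%20fifth%20street");
    if (curl_easy_perform(curl) != CURLE_OK) return 1;

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}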

/get/(user), /get/(user)/(start record)-(end record) - get records for a user

A GET request to /get/(user), where (user) is the user desired, will return the last 50 records for that user in a list of objects. The record objects look like this:

{
  "record":"0",
  "timestamp":1294309438,
  "data":"eatin a taco on fifth street"
}

record is the record number, timestamp is the UNIX epoch timestamp (i.e., the number of seconds since Jan 1 1970 00:00:00 GMT), and data is the content of the record. The record number is sent as a string because while Blërg supports record numbers up to 2^64 - 1, Javascript uses floating point for all its numbers, and can only support integers without truncation up to 2^53. This difference is largely academic, but I didn't want this problem to sneak up on anyone who is more insane than I am. :]
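
A C client, for what it's worth, can hold the full range; a minimal sketch of parsing the string form:

#include <inttypes.h>
#include <stdlib.h>

/* Sketch: parse a record number string into a real 64-bit integer. */
static uint64_t parse_record_number(const char *s) {
    return strtoull(s, NULL, 10);
}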

The second form, /get/(user)/(start record)-(end record), retrieves a specific range of records, from (start record) to (end record) inclusive. You can retrieve at most 100 records this way. If (end record) - (start record) specifies more than 100 records, the server will respond with JSON failure.

/info/(user) - Get information about a user

A GET request to /info/(user) will return a JSON object with information about the user (currently only the number of records). The info object looks like this:

{
  "record_count": "544"
}

Again, the record count is sent as a string for 64-bit safety.
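
Since /get can return at most 100 records at a time, a client that wants older records pages backward using record_count. Here's a sketch of the arithmetic, assuming records are numbered from 0 as in the example above (the helper is hypothetical, not part of Blërg):

#include <inttypes.h>
#include <stdio.h>

/* Sketch: build the /get URL for the most recent page of up to 100
   records, given record_count from /info. Records start at 0, so the
   newest record is record_count - 1. */
static void latest_page_url(const char *user, uint64_t record_count,
                            char *buf, size_t buflen) {
    if (record_count == 0) {
        snprintf(buf, buflen, "/get/%s", user);  /* no records yet */
        return;
    }
    uint64_t end = record_count - 1;
    uint64_t start = record_count >= 100 ? record_count - 100 : 0;
    snprintf(buf, buflen, "/get/%s/%" PRIu64 "-%" PRIu64, user, start, end);
}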

/tag/(#|H|@)(tagname) - Retrieve records containing tags

A GET request to this endpoint will return the last 50 records associated with the given tag. The first character is either # or H for hashtags, or @ for mentions (I call them ref tags). You should URL encode the # or @, lest some servers complain at you. The H alias for # was created because Apache helpfully strips the fragment of a URL (everything from the # to the end) before handing it off to the CGI, even if the hash is URL encoded. The record objects also contain an extra author field, like so:

{
  "author":"Jon",
  "record":"57",
  "timestamp":1294555793,
  "data":"I'm taking #garfield to the vet."
}

There is currently no support for getting more than the last 50 records for a tag, but /tag will probably mutate to work like /get.

Design

Motivation

Blërg was created as the result of a thought experiment: "What if Twitter didn't need thousands of servers? What if its millions of users could be handled by a single highly efficient server?" This is probably an unreachable goal due to the sheer amount of I/O, but we could certainly do better. Blërg was thus designed as a system with very simple requirements:

  1. Store and fetch small chunks of text efficiently
  2. Create fast indexes for hash tags and @ mentions
  3. Provide an HTTP interface web apps can use

And to further simplify, I didn't bother handling deletes, full text search, or more complicated tag searches. Blërg only does the basics.

Web App Stack

Classical model:

  Client App: HTML/Javascript
  Webserver:  Apache, lighttpd, nginx, etc.
  Server App: Python, Perl, Ruby, etc.
  Database:   MySQL, PostgreSQL, MongoDB, CouchDB, etc.

Modern web applications have at least a four-layer approach. You have the client-side browser app written in HTML and Javascript, the web server, the server-side application typically written in some scripting language (or, if it's high-performance, ASP/Java/C/C++), and the database (usually SQL, but newer web apps seem to love object-oriented DBs).

Blërg model:

  Blërg Client App: HTML/Javascript
  Blërg Database:   standalone HTTP server or CGI

Blërg compresses the last two or three layers into one application. Blërg can be run as either a standalone web server, or as a CGI (FastCGI support is planned, but I just don't care right now). Less waste, more throughput. As a consequence of this, the entirety of the application logic that the user sees is implemented in the client app in Javascript. That's why all the URLs have #'s — the page is loaded once and switched on the fly to show different views, further reducing load on the server. Even parsing hash tags and URLs is done in client JS.

The API is simple and pragmatic. It's not entirely RESTful, but is rather designed to work well with web-based front-ends. Client data is always POSTed with the usual application/x-www-form-urlencoded encoding, and server data is always returned in JSON format.

The HTTP interface to the database idea has already been done by CouchDB, though I didn't know that until after I wrote Blërg. :)

Database

Early in the design process, I decided to blatantly copy varnish and rely heavily on mmap for I/O. Each user in Blërg has their own database, which consists of one or more data and index files, and a metadata file. When a database is opened, only the metadata is actually read (currently a single 64-bit integer keeping track of the last record id). The data and index files are memory mapped, which hopefully makes things more efficient by letting the OS handle when to read from disk. The index files are preallocated because I believe it's more efficient than writing to them 40 bytes at a time as records are added. Here's some info on the database's limitations:

  maximum record size:                    65535 bytes
  maximum number of records per database: 2^64 - 1
  maximum number of tags per record:      1024

To provide support for 32-bit machines, and to not create grossly huge and unwieldy data files, the database layer splits data and index files into many "segments" containing at most 64K entries each. Those of you doing some quick math in your heads may note that this could cause a problem on 32-bit machines — if a full segment contains entries of the maximum length, you'll have to mmap 4GB (65536 records × 65535 bytes is just shy of 4GB, and 32-bit Linux gives each process only 3GB of virtual memory addressing). Right now, 32-bit users should change RECORDS_PER_SEGMENT in config.h to something lower like 32768. In the future, I might do something smart like not mmaping the whole fracking file.
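
Locating a record is then just a division and a remainder (a sketch; RECORDS_PER_SEGMENT is the config.h knob mentioned above, and the function itself is made up):

#include <stdint.h>

#define RECORDS_PER_SEGMENT 65536  /* from config.h; lower this on 32-bit */

/* Sketch: split a record number into a segment number and a slot
   within that segment's index file. */
static void locate_record(uint64_t record, uint64_t *segment, uint64_t *slot) {
    *segment = record / RECORDS_PER_SEGMENT;
    *slot    = record % RECORDS_PER_SEGMENT;
}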

A record is stored by first appending the data to the data file, then writing an index entry containing the offset and length of the data, as well as the timestamp, to the index file. Since each index entry is fixed length, we can find the index entry simply by multiplying the record number we want by the size of the index entry. Upshot: constant-time random-access reads and constant-time writes. As an added bonus, because we're using append-only files, we get lockless reads.
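
In code, the lookup is morally equivalent to this (the struct layout is a guess for illustration; the real index entry is around 40 bytes and certainly differs):

#include <stdint.h>

/* Guessed index entry layout, not Blërg's actual on-disk format. */
struct index_entry {
    uint64_t offset;     /* byte offset of the record in the data file */
    uint16_t length;     /* record length; max is 65535 bytes          */
    uint32_t timestamp;  /* UNIX epoch timestamp                       */
};

/* Constant-time lookup: with the index file mmap'd at 'index',
   entry n is just pointer arithmetic away. */
static struct index_entry *get_entry(struct index_entry *index, uint64_t n) {
    return &index[n];
}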

Tags are handled by a separate set of indices, one per tag. Each index record simply stores the user and record number. Tags are searched by opening the tag file, reading the last 50 entries or so, and then reading all the records listed. Voila, fast tag lookups.
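
An entry in a tag index needs only those two fields; a guessed layout (again, an illustration, not the real on-disk format):

#include <stdint.h>

/* Guessed tag index entry: which user, and which of their records,
   contains the tag. Usernames are at most 32 characters. */
struct tag_entry {
    char username[32];
    uint64_t record;
};

Under that layout, reading the last 50 entries is just a seek to 50 * sizeof(struct tag_entry) bytes from the end of the file.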

At this point, you're probably thinking, "Is that it?" Yep, that's it. Blërg isn't revolutionary, it's just a system whose requirements were pared down until the implementation could be made dead simple.

Also, keeping with the style of modern object databases, I haven't implemented any data safety (har har). Blërg does not sync anything to disk before returning success. This should make Blërg extremely fast, and totally unreliable in a crash. But that's the way you want it, right? :]

Problems and Future Work

Blërg probably doesn't actually work like Twitter because I've never actually had a Twitter account.

I couldn't find a really good fast HTTP server library. Libmicrohttpd is small, but it's focused on embedded applications, so it often eschews speed for small memory footprint. This is especially apparent when you watch it chew through a POST request 300 bytes at a time even though you've specified a buffer size of 256K. Http_blerg is still pretty fast this way (on my 2GHz Opteron 246, siege says it serves a 690-byte /get request at about 945 transactions per second, average response time 0.05 seconds, with 100 concurrent accesses), but a high-efficiency HTTP server implementation could knock this out of the park.

Libmicrohttpd is also really difficult to work with. If you look at the code, http_blerg.c is about 70% longer than cgi_blerg.c simply because of all the iterator hoops I had to jump through to process POST requests. And if you can believe it, I wrote http_blerg.c first. If I'd done it the other way around, I probably would have given up on libmicrohttpd. :-/

The data structures written to disk are dependent on the size and endianness of the primitive data types on your architecture and OS. This means that the databases are not portable. A dump/import tool is probably the easiest way to handle this.

I do want to make a FastCGI version eventually, and this will probably be a rather simple modification of cgi_blerg.

Implementing deletes will be... interesting. There is room in the record index for a 'deleted' flag, but the problem is deleting any tags referenced in the data. This requires rescanning the record content and putting a 'deleted' flag in the tag indices. This will not be pretty, so I'm just going to ignore it and hope nobody makes any mistakes. ;]

Tag indices can grow arbitrarily large, which will cause problems for 32-bit machines around the 3GB mark. Still, that's something like 80 million tags, so maybe it's not something to worry about.

The API currently requires the client to transmit the user's password in the clear. A digest-based authentication scheme would be better, though for real security, the app should run over HTTPS.