</table>
<p>Blërg does both by smashing the last two or three layers into one
application. Blërg can be run as either a standalone web server
(currently deprecated because maintaining two versions is hard), or as a
CGI (FastCGI support is planned, but I just don't care right now). Less
waste, more throughput. As a consequence of this, the entirety of the
application logic that the user sees is implemented in the client app in
Javascript. That's why all the URLs have #'s — the page is loaded
once and switched on the fly to show different views, further reducing
load on the server. Even parsing hash tags and URLs is done in client
JS.
<p>The API is simple and pragmatic. It's not entirely RESTful, but is
rather designed to work well with web-based front-ends. Client data is
stored per user: each Blërg user has their own database, which consists
of a metadata file, and one or more data and index files. The data and
index files are memory mapped, which hopefully makes things more
efficient by letting the OS handle when to read from disk (or maybe not
— I haven't benchmarked it). The index files are preallocated
because I believe it's more efficient than writing 40 bytes at a time as
records are added. The database's limits are reasonable:
<table class="statistics">
<tr><td>maximum record size</td><td>65535 bytes</td></tr>
<tr><td>maximum number of records per database</td><td>2<sup>64</sup> - 1</td></tr>
<tr><td>maximum number of tags per record</td><td>1024</td></tr>
</table>
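<p>To make the preallocation and memory-mapping idea concrete, here's a
minimal sketch in C. The 40-byte index records and 64K-entry segments
come from the text above, but the struct layout, field names, and
<code>open_index_segment</code> function are illustrative guesses, not
Blërg's actual on-disk format:
<pre>
/* Sketch of a preallocated, memory-mapped index segment. The 40-byte
 * record size and 64K entries per segment come from the text; the
 * struct layout and names are hypothetical. */
#include &lt;fcntl.h&gt;
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;
#include &lt;sys/mman.h&gt;
#include &lt;unistd.h&gt;

#define RECORDS_PER_SEGMENT 65536

/* Hypothetical 40-byte index record. */
struct index_record {
    uint64_t offset;   /* byte offset of the record in the data file */
    uint16_t length;   /* record length (max 65535) */
    uint8_t  pad[30];  /* flags, timestamps, etc. */
};

static struct index_record *open_index_segment(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd &lt; 0)
        return NULL;

    /* Preallocate the whole segment up front instead of growing it
     * 40 bytes at a time as records are added. */
    off_t size = (off_t)RECORDS_PER_SEGMENT * sizeof(struct index_record);
    if (ftruncate(fd, size) &lt; 0) {
        close(fd);
        return NULL;
    }

    /* Map it and let the OS decide when to page data in and out. */
    void *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  /* the mapping stays valid after close */
    return map == MAP_FAILED ? NULL : map;
}

int main(void)
{
    struct index_record *idx = open_index_segment("segment0.idx");
    if (!idx) {
        perror("open_index_segment");
        return 1;
    }
    /* Writes go straight into the mapping; the kernel flushes them. */
    idx[0].offset = 0;
    idx[0].length = 42;
    printf("record size: %zu bytes\n", sizeof(struct index_record));
    return 0;
}
</pre>
<p>The nice property is that appending a record is just a struct
assignment into the mapping — no <code>write()</code> calls, no
userspace buffering.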
<p>So as not to create grossly huge and unwieldy data files, the
database layer splits data and index files into many "segments"
containing at most 64K entries each. Those of you doing some quick
mental math may note that this could cause a problem on 32-bit machines
— if a full segment contains entries of the maximum length, you'll
have to mmap 4GB (32-bit Linux gives each process only 3GB of virtual
address space). Right now, 32-bit users should change
<code>RECORDS_PER_SEGMENT</code> in <code>config.h</code> to something
lower like 32768. In the future, I might do something smart like not
mmapping the whole fracking file.
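<p>If you want to check that mental math yourself, here's the
arithmetic as a tiny C program (the constants are the ones from the
limits table and <code>config.h</code> above):
<pre>
/* Back-of-the-envelope check of the 32-bit problem: a full segment of
 * maximum-length records needs ~4GB of address space. */
#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;

#define MAX_RECORD_SIZE     65535   /* from the limits table */
#define RECORDS_PER_SEGMENT 65536   /* default in config.h */

int main(void)
{
    uint64_t worst = (uint64_t)MAX_RECORD_SIZE * RECORDS_PER_SEGMENT;
    uint64_t addr_space = 3ULL * 1024 * 1024 * 1024; /* ~3GB usable */

    printf("worst-case segment: %llu bytes\n", (unsigned long long)worst);
    printf("fits in 32-bit address space: %s\n",
           worst &lt; addr_space ? "yes" : "no");

    /* Halving RECORDS_PER_SEGMENT to 32768 halves the worst case. */
    uint64_t halved = (uint64_t)MAX_RECORD_SIZE * 32768;
    printf("with 32768 records: %llu bytes\n", (unsigned long long)halved);
    return 0;
}
</pre>
<p>65535 × 65536 is 4,294,901,760 bytes — just shy of 4GB, and
comfortably over the ~3GB a 32-bit Linux process can map. Dropping to
32768 records brings the worst case under 2GB.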