… make the layers more efficient, or reduce the number of layers.

Blërg does both by smashing the last two or three layers into one
application. Blërg can be run either as a standalone web server
(currently deprecated, because maintaining two versions is hard) or as a
CGI (FastCGI support is planned, but I just don't care right now). Less
waste, more throughput. As a consequence, the entirety of the
application logic that the user sees is implemented in the client app in
JavaScript. That's why all the URLs have #'s — the page is loaded once
and switched on the fly to show different views, further reducing load
on the server. Even parsing hash tags and URLs is done in client-side
JS.
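To make the "one application, fewer layers" idea concrete, here's a
minimal sketch of how a single handler can serve both a CGI invocation
and a standalone server loop. This is not Blërg's actual source;
handle_request() and the response body are made up for illustration.

    /* One application, two entry points: the same handler answers
     * requests whether we're invoked as a CGI or built standalone. */
    #include <stdio.h>
    #include <stdlib.h>

    static void handle_request(const char *path, FILE *out) {
        /* Shared application logic: map a request path to a response. */
        fprintf(out, "Status: 200 OK\r\n");
        fprintf(out, "Content-Type: text/plain\r\n\r\n");
        fprintf(out, "you asked for %s\n", path ? path : "/");
    }

    int main(void) {
        /* CGI mode: the web server hands us one request through the
         * environment and we answer on stdout. A standalone build
         * would instead accept() connections in a loop and call
         * handle_request() once per request. */
        handle_request(getenv("PATH_INFO"), stdout);
        return 0;
    }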

The API is simple and pragmatic. It's not entirely RESTful, but is
rather designed to work well with web-based front-ends. Client data is …

… early in the design process that I'd try out mmapped I/O. Each user in
Blërg has their own database, which consists of a metadata file and one
or more data and index files. The data and index files are
memory-mapped, which hopefully makes things more efficient by letting
the OS handle when to read from disk (or maybe not — I haven't
benchmarked it). The index files are preallocated because I believe
that's more efficient than writing to them 40 bytes at a time as records
are added (see the sketch below the table). The database's limits are
reasonable:
maximum record size                       65535 bytes
maximum number of records per database    2^64 - 1
maximum number of tags per record         1024

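As promised, here's a minimal sketch of opening, preallocating, and
mmapping one index segment, assuming the 40-byte index record mentioned
above. The field names and open_index_segment() are hypothetical, not
Blërg's actual layout.

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define RECORDS_PER_SEGMENT 65536

    struct index_record {      /* hypothetical 40-byte entry */
        uint64_t offset;       /* where this record's data starts */
        uint16_t length;       /* record size, hence the 65535-byte cap */
        unsigned char pad[30]; /* flags, timestamps, and so on */
    };

    struct index_record *open_index_segment(const char *path) {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return NULL;

        /* Preallocate the whole segment up front instead of growing
         * the file 40 bytes at a time as records are added. */
        if (ftruncate(fd, (off_t)sizeof(struct index_record) *
                          RECORDS_PER_SEGMENT) < 0) {
            close(fd);
            return NULL;
        }

        /* Map it and let the OS decide when pages hit the disk. */
        void *map = mmap(NULL,
                         sizeof(struct index_record) * RECORDS_PER_SEGMENT,
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd); /* the mapping stays valid after close */
        return map == MAP_FAILED ? NULL : map;
    }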
So as not to create grossly huge and unwieldy data files, the
database layer splits data and index files into many "segments"
containing at most 64K entries each. Those of you doing some quick
mental math may note that this could cause a problem on 32-bit machines
— if a full segment contains entries of the maximum length, you'll have
to mmap 4GB (32-bit Linux gives each process only 3GB of virtual address
space). Right now, 32-bit users should change RECORDS_PER_SEGMENT in
config.h to something lower, like 32768. In the future, I might do
something smart, like not mmapping the whole fracking file.
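For the curious, here's where that 4GB comes from, plus a sketch of the
segment arithmetic. The helpers are hypothetical, not Blërg's actual
code: with 65535-byte records and 64K entries, a full data segment
approaches 65536 * 65535 bytes, just shy of 4GB — more than a 32-bit
process can map at once.

    #include <stdint.h>

    #define RECORDS_PER_SEGMENT 65536  /* from config.h; lower on 32-bit */

    /* Which segment file holds record n, and at which slot inside it.
     * Lowering RECORDS_PER_SEGMENT (e.g. to 32768) halves the
     * worst-case size of a mapped segment — the 32-bit workaround
     * described above. */
    static inline uint64_t segment_of(uint64_t n)
    {
        return n / RECORDS_PER_SEGMENT;
    }

    static inline uint32_t slot_within(uint64_t n)
    {
        return (uint32_t)(n % RECORDS_PER_SEGMENT);
    }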