X-Git-Url: http://git.bytex64.net/?a=blobdiff_plain;f=www%2Fdoc%2Findex.html;h=e98ef47f0e0804b0dbca756781828d5bc61731f4;hb=0595c4ad4f349efc7d907b5bc4ad774c231f2588;hp=21982bd108abaf79adb59c0f5b8dbfeda613f9a9;hpb=dfd9f2ccb8a86e20401c2d789bd4152786484024;p=blerg.git

diff --git a/www/doc/index.html b/www/doc/index.html
index 21982bd..e98ef47 100644
--- a/www/doc/index.html
+++ b/www/doc/index.html
@@ -39,7 +39,8 @@ C.
   • /subscribe/(user) - Subscribe to a user's updates
   • /unsubscribe/(user) - Unsubscribe from a user's updates
   • /feed - Get updates for subscribed users
-  • /feedinfo/(user) - Get subscription status for a user
+  • /feedinfo, /feedinfo/(user) - Get subscription status
+  • /passwd - Change a user's password
   • Design

@@ -86,10 +87,15 @@ sense of humor, requires ruby to compile)
  • Configuring

-I know I'm gonna get shit for not using an autoconf-based system, but
-I really didn't want to spend time figuring it out. You should edit
-libs.mk and put in the paths where you can find headers and libraries
-for the above requirements.
+
+There is now an experimental autoconf build system. If you run
+add-autoconf, it'll do the magic and create a
+configure script that'll do the familiar things. If I ever
+get around to distributing source packages, you should find that this
+has already been done.
+
+If you'd rather stick with the manual system, you should edit libs.mk
+and put in the paths where you can find headers and libraries for the
+above requirements.

 Also, further apologies to BSD folks — I've probably committed
 several unconscious Linux-isms. It would not surprise me if the

@@ -106,6 +112,9 @@
 made individually as well, if you, for example, don't want to install
 the prerequisites for blerg.httpd or blerg.cgi.
+
+NOTE: blerg.httpd is deprecated and will not be
+updated with new features.
+

    Installing

 While it's not strictly required, Blërg will be easier to set up if

@@ -304,15 +313,26 @@ a user's updates

 POST to /feed, with a username parameter and an auth cookie. The
 server will respond with a JSON list of the last 50 updates
-from all subscribed users, in reverse chronological order.
+from all subscribed users, in reverse chronological order. Fetching
+/feed resets the new message count returned from /feedinfo.

 NOTE: subscription notifications are only stored while subscriptions
 are active. Any records inserted before or after a subscription is
 active will not show up in /feed.
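
For illustration only, here is a minimal libcurl sketch of the /feed
request shape described above. The host name, cookie name/value, and
username are assumptions for the example, not part of the documented
API.

    /* Minimal sketch: POST to /feed with libcurl.  The host, cookie
     * name/value, and username below are illustrative assumptions. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "http://blerg.example.com/feed");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "username=alice");
        curl_easy_setopt(curl, CURLOPT_COOKIE, "auth=SESSION-TOKEN-HERE");

        /* libcurl writes the response (the JSON list of records) to
         * stdout by default. */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        return 0;
    }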

-/feedinfo/(user) - Get subscription
+/feedinfo, /feedinfo/(user) - Get subscription
 status for a user

+
+POST to /feedinfo with a username parameter and an auth
+cookie to get general information about your subscribed feeds.
+Currently, this only tells you how many new records there are since the
+last time /feed was fetched. The server will respond with a JSON
+object:
+
+    {"new":3}
+

 POST to /feedinfo/(user) with a username parameter and an auth
 cookie, where (user) is a user whose subscription status you are
 interested in. The server will respond with a simple JSON object:

@@ -324,6 +344,19 @@ interested in. The server will respond with a simple JSON object:

    The value of "subscribed" will be either true or false depending on the subscription status. +

+/passwd - Change a user's password
+
+POST to /passwd with a username parameter and an auth
+cookie, plus password and new_password
+parameters to change the user's password. For extra protection,
+changing a password requires sending the user's current password in the
+password parameter. If authentication is successful and
+the password matches, the user's password is set to
+new_password and the server responds with JSON success.
+
+If the password doesn't match, or one of password or
+new_password is missing, the server returns JSON failure.
+
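
A sketch of the /passwd call under the same assumptions as the earlier
examples (host and cookie details are invented). Since passwords can
contain characters that are special in form encoding, they are escaped
before being placed in the POST body.

    /* Sketch: change a password via /passwd with libcurl. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;

        /* URL-encode both passwords before building the POST body. */
        char *old_pw = curl_easy_escape(curl, "hunter2", 0);
        char *new_pw = curl_easy_escape(curl, "correct horse battery staple", 0);
        char post[512];
        snprintf(post, sizeof(post),
                 "username=alice&password=%s&new_password=%s", old_pw, new_pw);

        curl_easy_setopt(curl, CURLOPT_URL, "http://blerg.example.com/passwd");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, post);
        curl_easy_setopt(curl, CURLOPT_COOKIE, "auth=SESSION-TOKEN-HERE");

        /* The server replies with a JSON success or failure object. */
        if (curl_easy_perform(curl) != CURLE_OK)
            fprintf(stderr, "request failed\n");

        curl_free(old_pw);
        curl_free(new_pw);
        curl_easy_cleanup(curl);
        return 0;
    }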

    Design

    Motivation

    @@ -381,14 +414,15 @@ make the layers more efficient, or reduce the number of layers.

 Blërg does both by smashing the last two or three layers into one
-application. Blërg can be run as either a standalone web server, or as
-a CGI (FastCGI support is planned, but I just don't care right now).
-Less waste, more throughput. As a consequence of this, the entirety of
-the application logic that the user sees is implemented in the client
-app in Javascript. That's why all the URLs have #'s — the page is
-loaded once and switched on the fly to show different views, further
-reducing load on the server. Even parsing hash tags and URLs are done
-in client JS.
+application. Blërg can be run as either a standalone web server
+(currently deprecated because maintaining two versions is hard), or as a
+CGI (FastCGI support is planned, but I just don't care right now). Less
+waste, more throughput. As a consequence of this, the entirety of the
+application logic that the user sees is implemented in the client app in
+Javascript. That's why all the URLs have #'s — the page is loaded
+once and switched on the fly to show different views, further reducing
+load on the server. Even parsing hash tags and URLs is done in client
+JS.

 The API is simple and pragmatic. It's not entirely RESTful, but is
 rather designed to work well with web-based front-ends. Client data is

@@ -407,24 +441,24 @@ early in the design process that I'd try out mmaped I/O.
 Each user in Blërg has their own database, which consists of a metadata
 file, and one or more data and index files. The data and index files
 are memory mapped, which hopefully makes things more efficient by
 letting the OS
-handle when to read from disk (or maybe not &mdash I haven't benchmarked
-it). The index files are preallocated because I believe it's more
-efficient than writing to it 40 bytes at a time as records are added.
-The database's limits are reasonable:
+handle when to read from disk (or maybe not — I haven't
+benchmarked it). The index files are preallocated because I believe
+it's more efficient than writing to them 40 bytes at a time as records
+are added. The database's limits are reasonable:
 maximum record size                       65535 bytes
-maximum number of records per database    2^64 - 1 bytes
+maximum number of records per database    2^64 - 1
 maximum number of tags per record         1024

 So as not to create grossly huge and unwieldy data files, the
 database layer splits data and index files into many "segments"
-containing at most 64K entries each. Those of you doing some quick math
-in your heads may note that this could cause a problem on 32-bit
-machines — if a full segment contains entries of the maximum
-length, you'll have to mmap 4GB (32-bit Linux gives each process only
-3GB of virtual address space). Right now, 32-bit users should change
+containing at most 64K entries each. Those of you doing some quick
+mental math may note that this could cause a problem on 32-bit machines
+— if a full segment contains entries of the maximum length, you'll
+have to mmap 4GB (32-bit Linux gives each process only 3GB of virtual
+address space). Right now, 32-bit users should change
 RECORDS_PER_SEGMENT in config.h to something lower like 32768. In the
 future, I might do something smart like not mmaping the whole fracking
 file.
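
For reference on that quick math: 65536 records per segment times the
65535-byte maximum record size is just under 4GB of data to map, which
is why halving RECORDS_PER_SEGMENT helps on 32-bit hosts. The sketch
below shows the preallocate-then-mmap idea applied to a hypothetical
index segment of 40-byte entries; the file name, constants, and layout
are illustrative only, not Blërg's actual on-disk format.

    /* Sketch: preallocate and mmap one index segment. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>

    #define RECORDS_PER_SEGMENT 65536   /* 32768 is the suggested 32-bit value */
    #define INDEX_ENTRY_SIZE    40      /* per the docs, index records are 40 bytes */

    int main(void)
    {
        size_t len = (size_t)RECORDS_PER_SEGMENT * INDEX_ENTRY_SIZE;

        int fd = open("user.index.0", O_RDWR | O_CREAT, 0644);
        if (fd < 0)
            return 1;

        /* Preallocate the whole segment instead of growing the file
         * 40 bytes at a time as records come in. */
        if (ftruncate(fd, (off_t)len) < 0)
            return 1;

        unsigned char *idx = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
        if (idx == MAP_FAILED)
            return 1;

        /* From here on, the OS decides when pages actually hit the disk. */
        printf("mapped %zu bytes of index\n", len);

        munmap(idx, len);
        close(fd);
        return 0;
    }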