I know I'm gonna get shit for not using an autoconf-based system, but I really didn't want to spend time figuring it out.

There is now an experimental autoconf build system. If you run add-autoconf, it'll do the magic and create a configure script that'll do the familiar things. If I ever get around to distributing source packages, you should find that this has already been done.

If you'd rather stick with the manual system, you should edit libs.mk and put in the paths where you can find headers and libraries for the above requirements.
Also, further apologies to BSD folks — I've probably committed
several unconscious Linux-isms. It would not surprise me if the
made individually as well, if you, for example, don't want to install the prerequisites for blerg.httpd or blerg.cgi.
NOTE: blerg.httpd is deprecated and will not be updated with new features.
While it's not strictly required, Blërg will be easier to set up if
extra author field, like so:
There is currently no support for getting more than 50 tags, but /tag will probably mutate to work like /get.
POST to /subscribe/(user) with a username parameter and an auth cookie, where (user) is the user whose updates you wish to subscribe to. The server will respond with JSON failure if the auth cookie is bad or if the user doesn't exist. The server will respond with JSON success after the subscription is successfully registered.
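As a rough sketch, a client might build the /subscribe request like this in Python. The server address and the cookie name `auth` are assumptions for illustration; the doc only promises that an auth cookie is required.

```python
import urllib.request
from urllib.parse import urlencode

BASE = "http://localhost:8080"  # hypothetical Blërg server address

def subscribe_request(user, username, auth_cookie):
    """Build (but don't send) the POST for /subscribe/(user).

    'user' is the account to subscribe to; 'username' identifies the
    subscriber. The cookie name 'auth' is an assumption.
    """
    data = urlencode({"username": username}).encode()
    req = urllib.request.Request(BASE + "/subscribe/" + user, data=data)
    req.add_header("Cookie", "auth=" + auth_cookie)
    return req

req = subscribe_request("alice", "bob", "SECRET")
# urllib.request.urlopen(req) would actually send it and return the
# JSON success/failure body described above.
```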
Identical to /subscribe, but removes the subscription.
POST to /feed, with a username parameter and an auth cookie. The server will respond with a JSON list of the last 50 updates from all subscribed users, in reverse chronological order. Fetching /feed resets the new message count returned from /feedinfo.
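A client can treat the response as an already-ordered list. The record fields in this sketch (author, timestamp) are assumptions; the doc above only promises a JSON list, newest first.

```python
import json

# Hypothetical /feed response body; field names are illustrative.
body = '[{"author": "alice", "timestamp": 300}, {"author": "bob", "timestamp": 200}]'

records = json.loads(body)
timestamps = [r["timestamp"] for r in records]
# Reverse chronological: no client-side sorting should be needed.
assert timestamps == sorted(timestamps, reverse=True)
```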
NOTE: subscription notifications are only stored while subscriptions are active. Any records inserted before or after a subscription is active will not show up in /feed.
POST to /feedinfo with a username parameter and an auth cookie to get general information about your subscribed feeds. Currently, this only tells you how many new records there are since the last time /feed was fetched. The server will respond with a JSON object:

{"new":3}
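Since fetching /feed resets this counter, a natural client pattern is to poll /feedinfo cheaply and only fetch /feed when there is something new. A minimal sketch of that decision, given the JSON body above:

```python
import json

def should_fetch_feed(feedinfo_body):
    """Given the JSON body from POST /feedinfo (e.g. '{"new":3}'),
    decide whether it's worth fetching /feed. Fetching /feed resets
    the counter, so this returns True at most once per batch of news.
    """
    info = json.loads(feedinfo_body)
    return info.get("new", 0) > 0

should_fetch_feed('{"new":3}')  # True: 3 new records since last /feed
should_fetch_feed('{"new":0}')  # False: nothing new
```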
POST to /feedinfo/(user) with a username parameter and an auth cookie, where (user) is a user whose subscription status you are interested in. The server will respond with a simple JSON object:

{"subscribed":true}
The value of "subscribed" will be either true or false depending on the subscription status.
POST to /passwd with a username parameter and an auth cookie, plus password and new_password parameters, to change the user's password. For extra protection, changing a password requires sending the user's current password in the password parameter. If authentication is successful and the password matches, the user's password is set to new_password and the server responds with JSON success.

If the password doesn't match, or either password or new_password is missing, the server returns JSON failure.
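The form body for a password change might be assembled like this; the values are placeholders, and the parameter names come straight from the description above.

```python
from urllib.parse import urlencode

def passwd_body(username, old_password, new_password):
    """Form body for POST /passwd. The current password goes in
    'password'; the replacement goes in 'new_password'."""
    return urlencode({
        "username": username,
        "password": old_password,
        "new_password": new_password,
    })

body = passwd_body("alice", "hunter2", "correct horse")
# Sent with the auth cookie, the server replies with JSON
# success or failure as described above.
```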
When I first started thinking about the idea of subscriptions, I immediately came up with the naïve solution: keep a list of the users each user is subscribed to, then, when you want to get updates, iterate over the list and find the last entries for each user. And that would work, but it's kind of costly in terms of disk I/O. I have to visit each user in the list, retrieve their last few entries, and store them somewhere else to be sorted later. And worse, that computation has to be done every time a user checks their feed. As the number of users and subscriptions grows, that will become a problem.
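The naïve pull model can be sketched with an in-memory stand-in for per-user storage. The record shape (timestamp, author, text) is an illustrative assumption; the point is that all the visiting, pulling, and merging happens on every feed request.

```python
import heapq

# In-memory stand-in for per-user record storage, oldest first.
records = {
    "alice": [(1, "alice", "first"), (4, "alice", "later")],
    "bob":   [(2, "bob", "hi"), (3, "bob", "again")],
}

def naive_feed(subscriptions, n=50):
    """Visit every subscribed user, pull their recent records, and
    merge them newest first -- work repeated on every request."""
    recent = [sorted(records[u], reverse=True)[:n] for u in subscriptions]
    return list(heapq.merge(*recent, reverse=True))[:n]

feed = naive_feed(["alice", "bob"])
# newest first: timestamps 4, 3, 2, 1
```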
So instead, I thought about it the other way around. Instead of doing all the work when the request is received, Blërg tries to do as much as possible by "pushing" updates to subscribed users. You can think of it kind of like a mail system. When a user posts new content, a notification is "sent" out to each of that user's subscribers. Later, when the subscribers want to see what's new, they simply check their mailbox. Checking your mailbox is usually a lot more efficient than going around and checking everyone's records yourself, even with the overhead of the "mailman."
The "mailbox" is a subscription index, which is identical to a tag index, but is a per-user construct. When a user posts a new record, a subscription index record is written for every subscriber. It's a similar amount of I/O as the naïve version above, but the important difference is that it's only done once. Retrieving records for accounts you're subscribed to is then as simple as reading your subscription index and reading the associated records. This is hopefully less I/O than the naïve version, since you're reading, at most, as many accounts as you have records in the last N entries of your subscription index, instead of all of them. And as an added bonus, since subscription index records are added as posts are created, the subscription index is automatically sorted by time! To support this "mail" architecture, we also keep a list of subscribers and subscrib...ees in each account.
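The push scheme above can be modeled in a few lines. This is a simplified in-memory sketch, not Blërg's on-disk format: delivery happens once at post time, and reading a feed is just reading your own index, which is sorted by construction.

```python
from collections import defaultdict

subscribers = defaultdict(set)   # author -> set of subscriber names
sub_index = defaultdict(list)    # user -> list of (timestamp, author, recno)

def post(author, timestamp, recno):
    """On each new record, write one index entry per subscriber --
    the one-time 'mail delivery' step."""
    for user in subscribers[author]:
        sub_index[user].append((timestamp, author, recno))

def feed(user, n=50):
    """Reading the feed is just reading your own index, newest first.
    No visiting other accounts, no sorting: posts arrived in order."""
    return list(reversed(sub_index[user][-n:]))

subscribers["alice"].add("bob")
post("alice", 1, 0)
post("alice", 2, 1)
feed("bob")  # [(2, 'alice', 1), (1, 'alice', 0)]
```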
Blërg probably doesn't actually work like Twitter because I've never