Proposal: Gaia Hub Inboxes


#1

Problem

There are many social applications where it’s important to be able to signal another user. For example, in a messaging app, I want to send an “invite-to-chat” to someone I haven’t started a conversation with yet. The nature of decentralized storage (and Gaia) makes things like this non-trivial.

Proposal: Gaia Hub Inboxes

We would add an endpoint to Gaia hubs, something like the following:

POST /inbox/${destinationAddress}/${fromAddress}

This authenticates with the fromAddress — meaning that the request requires a bearer token to be signed by the fromAddress. The maximum size of these POSTs will be very small (e.g., 160 bytes).
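
For illustration, a client-side write might look like the following sketch (the hub URL, token variable, and payload here are assumptions, not part of the spec):

// Hedged sketch: post a small notification to the destination's inbox.
const contents = 'bob.id:alice-blog-comments.json'   // must stay under the cap (e.g., 160 bytes)
fetch(`https://hub.example.com/inbox/${destinationAddress}/${fromAddress}`, {
  method: 'POST',
  headers: {
    'Authorization': `bearer ${tokenSignedByFromAddress}`,
    'Content-Type': 'text/plain'
  },
  body: contents
}).then(resp => {
  if (!resp.ok) { throw new Error(`inbox write rejected: ${resp.status}`) }
})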

And then there is an associated read endpoint:

GET /inbox/${destinationAddress}?page=${page}

Which will return a paginated list of all received messages from the above endpoint, in the form:

[{ senderAddress: string, contents: string, receivedTime: timestamp }]

Ordered by received time. Furthermore, entries in this list are unique by sender, meaning that a sender will only ever be able to keep 160 bytes in the inbox at a time.
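
For example, a page of Alice’s inbox might look like this (all values are illustrative):

[ { senderAddress: "1BobExampleAddress...", contents: "bob.id:alice-blog-comments.json", receivedTime: 1532044800 },
  { senderAddress: "1CathyExampleAddress...", contents: "cathy.id:alice-blog-comments.json", receivedTime: 1532048400 } ]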

Finally, there is also a DELETE endpoint to clear the inbox:

DELETE /inbox/${destinationAddress}?afterTime=${time}

The GET and DELETE endpoints both require authentication with the destinationAddress.
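
A hedged sketch of the read-then-clear flow a receiving client might run (URLs, token variable, and the interpretation of afterTime are assumptions):

// Fetch a page of notifications, process them, then batch-delete by time range.
const authHeader = { 'Authorization': `bearer ${tokenSignedByDestinationAddress}` }
fetch(`https://hub.example.com/inbox/${destinationAddress}?page=0`, { headers: authHeader })
  .then(resp => resp.json())
  .then(messages => {
    if (messages.length === 0) { return }
    // ... application-specific processing of the messages happens here ...
    const oldest = Math.min(...messages.map(m => m.receivedTime))
    // Assumes afterTime means "delete entries received after this timestamp".
    return fetch(`https://hub.example.com/inbox/${destinationAddress}?afterTime=${oldest - 1}`,
                 { method: 'DELETE', headers: authHeader })
  })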

Requirements on the Driver Model

The above spec will place one major new requirement on the driver model: The ability to list files / search.

This is virtually already a requirement. The way the readUrl and writeUrl semantics work requires that a readUrl’s suffix correspond directly to the posted filename. In practice, systems which support that also support listing and searching by name (e.g., production key-value stores like Redis and memcached, cloud storage systems like Azure Blob Storage and S3, and filesystems).
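
Concretely, a driver would need a listing operation alongside its existing read/write operations. A minimal sketch; the method names and shapes below are assumptions for illustration, not the current driver API:

// Hypothetical shape of an inbox-capable driver.
class InboxCapableDriver {
  performWrite (path, contentBuffer) { /* existing write behavior */ }
  getReadURLPrefix () { /* existing read-URL behavior */ }
  // New requirement: enumerate stored keys under a prefix such as
  // 'inbox/<destinationAddress>/'. This maps onto S3 object listing,
  // Azure blob listing, Redis SCAN, or a directory read.
  listKeysWithPrefix (prefix, page) {
    return Promise.resolve({ entries: [], nextPage: null })
  }
}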

Example Use Case: Blog with Comments

A blog post with comments. Let’s say some user, Alice, writes a blog post. The blog application wants to integrate comments from other users (Bob, Cathy) so that when people view the page, they see all the comments together, and only those comments which Alice’s client has integrated.

  1. Alice publishes a blog post. Users read it from:
getFile('blog.json', { username: 'alice.id' })
  2. Bob and Cathy both write comments, and store them on their own Gaia hubs:
putFile('alice-blog-comments.json', JSON.stringify(comment), { encrypted: false })
  3. Bob and Cathy both signal to Alice that they left comments (a sketch of how writeMessage and getMessages could be implemented against the inbox endpoints follows this example):
writeMessage('bob.id:alice-blog-comments.json', { username: 'alice.id' })
writeMessage('cathy.id:alice-blog-comments.json', { username: 'alice.id' })
  4. Alice’s client, on next login (or with an automated bot-in-a-box), gets the most recent items from her inbox, fetches the data, and integrates it into blog.json:
getMessages({ after: lastLogin })
  .then((messages) =>
    Promise.all(messages.map(msg => {
      const [author, filename] = msg.contents.split(':')
      return getFile(filename, { username: author })
    }))
    .then(messageContents => {
      const allComments = schemaValidateSanitize(messageContents)
      return putFile('blog.json', JSON.stringify({ blogEntry: blogEntry,
                                                   comments: allComments }))
    }))
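
For reference, the writeMessage and getMessages helpers used above could be thin wrappers over the inbox endpoints. A minimal sketch, assuming a fetch-style client; the hub URLs, token variables, and the resolveInboxUrl helper are all hypothetical:

// Hedged sketch of the client helpers used in the example above.
// resolveInboxUrl is a hypothetical helper that looks up a user's Gaia hub
// URL and inbox (destination) address from their username.
function writeMessage (contents, { username }) {
  return resolveInboxUrl(username).then(({ hubUrl, destinationAddress }) =>
    fetch(`${hubUrl}/inbox/${destinationAddress}/${fromAddress}`, {
      method: 'POST',
      headers: { 'Authorization': `bearer ${tokenSignedByFromAddress}` },
      body: contents   // must fit the small size cap (e.g., 160 bytes)
    }))
}

function getMessages ({ after }) {
  // Pagination is elided; a real client would walk the ?page= parameter.
  return fetch(`${myHubUrl}/inbox/${myAddress}?page=0`,
               { headers: { 'Authorization': `bearer ${tokenSignedByMyAddress}` } })
    .then(resp => resp.json())
    .then(messages => messages.filter(m => m.receivedTime > after))
}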

#4

Yes, 1000% yes! Notifications for shared files in Graphite have been a beating. This would be incredibly helpful.


#5

+1. With Stealthy moving to mobile, this will be very helpful for doing “decentralized notifications”.

We are currently working around it with Firebase and informing users that it’s a convenience feature that’s centralized.

Just a side note: Status, our biggest competitor, is proposing a ‘pay for notification’ model, since they do most of their processing on-chain rather than off-chain.


#6

Excellent proposal Aaron. I can see many ways to use this functionality in Stealthy and other dApps.

Some questions for you:

  1. receivedTime is UTC and generated by the person leaving the message, implicit in the driver code, or some other mechanism?
  2. “a sender will only ever be able to keep 160 bytes in the inbox at a time” - what if the sender is using two different dApps that use this mechanism? (Will there be a way to tell the sender that the inbox is full? Will the 2nd message override the first? Is there a downside to upping the inbox / user to 10 x MaxBytes?)
  3. In the delete endpoint, what’s the typical use case for afterTime (i.e. I’m wondering why you wouldn’t always want to delete immediately?)
  4. Is there a max size for the inbox (i.e. n unique user messages)?

After reading this a couple of times, I think the biggest thing that sticks out to me is limiting a sender to 160 bytes per inbox at a time. If I understand this correctly, it places constraints on applications in social media / commenting situations, where I think a single user would want to leave multiple comments on a person’s posts (i.e. across multiple posts). There are workarounds though, e.g. polling, or having the message point to a file containing a complete list of notifications.

Thanks for posting this :+1:


#7
  1. receivedTime is UTC and generated by the person leaving the message, implicit in the driver code, or some other mechanism?

The sender would include the timestamp, but the Gaia hub’s UTC clock would be used to verify that requests aren’t from too far in the past or the future (such requests would be NACK’ed).
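
In other words, the hub would enforce a freshness window around its own clock. A minimal sketch, with an illustrative window size:

// Reject messages whose claimed timestamp is too far from the hub's UTC clock.
const MAX_CLOCK_SKEW_SECONDS = 300   // illustrative, not part of the proposal
function timestampAcceptable (claimedTime, hubNowUTC) {
  return Math.abs(hubNowUTC - claimedTime) <= MAX_CLOCK_SKEW_SECONDS
}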

“a sender will only ever be able to keep 160 bytes in the inbox at a time” - what if the sender is using two different dApps that use this mechanism? (Will there be a way to tell the sender that the inbox is full? Will the 2nd message override the first? Is there a downside to upping the inbox / user to 10 x MaxBytes?)

In this case, the sender would post the URL to the data in their Gaia hub. The receiver’s application would need to be smart enough to first get the notification, parse it, and then fetch the (much larger) data. This is the expected way to send large data to users, since the sender rightfully pays for hosting it. In general, it’s a bad idea to support large messages, since that would make it easy to DoS or spam Gaia hubs with garbage messages.

The idea is that each (user, dapp) pair would have one notification slot. So, I could send a notification to you on Stealthy and on Graphite, and they’d be stored under different keys.

Within the context of a single dapp, my subsequent message would override my previous one.
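
One way a hub could realize this (the key layout is an assumption, not part of the proposal) is to key each slot by the (destination, app, sender) triple, so a later write from the same sender and app overwrites the earlier one:

// Hedged sketch of a per-(sender, app) inbox slot key.
function inboxSlotKey (destinationAddress, appDomain, senderAddress) {
  return 'inbox/' + destinationAddress + '/' + appDomain + '/' + senderAddress
}
// Writing twice from the same sender within the same app replaces the previous
// 160-byte payload; the same sender in a different app gets a separate slot.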

In the delete endpoint, what’s the typical use case for afterTime (i.e. I’m wondering why you wouldn’t always want to delete immediately?)

I think the idea is to support batch deletion. The application client would be expected to pull out all outstanding notifications, process them, and then delete them in an efficient way. Fetching notifications and deleting them by timestamp range can be done with one RTT each.

Is there a max size for the inbox (i.e. n unique user messages)?

The theoretical maximum would be (message size) * (number of users) * (number of dapps). Regarding the number of dapps, the user would have the option to deny messages from dapps she does not use, and to whitelist/blacklist users.
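
For a rough sense of scale (illustrative numbers only): 160 bytes * 1,000 distinct senders * 10 dapps works out to about 1.6 MB of worst-case inbox state.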


#8

Thanks Jude, that clarifies things for me. Looking forward to working with this when it’s ready :+1:


#9

My thinking was that this would just be set by the gaia hub’s UTC clock.


#10

My thinking was that this would just be set by the gaia hub’s UTC clock.

Ah, that would work too. I assumed that the timestamp was part of the signed message payload.


#11

Addendum: Secret Notifications

It’d be nice to not expose the metadata of notifications publicly. If I receive a notification from the address 15GAGiT2j2F1EzZrvjk3B8vBCfwVEzQaZx, it’d be great if everyone in the world couldn’t easily deduce that information.

Instead of posting the message using that key, the sender would derive an unhardened child key using hash(destinationAddress, randomSalt); the sender’s message would contain that salt, and the message would be encrypted (and signed) with the destination’s public key.
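
A minimal sketch of the sender-side derivation, assuming a BIP32-style key node; the hash construction, index truncation, and helper names are assumptions:

// Hedged sketch: derive an unhardened child key from hash(destinationAddress, salt).
const crypto = require('crypto')

function deriveNotificationChild (senderKeyNode, destinationAddress) {
  // Fresh random salt so observers can't link the derived key to the destination.
  const salt = crypto.randomBytes(16).toString('hex')
  const digest = crypto.createHash('sha256')
    .update(destinationAddress + salt)
    .digest()
  // Use the top 31 bits of the digest as an unhardened BIP32 child index.
  const index = digest.readUInt32BE(0) & 0x7fffffff
  return { childNode: senderKeyNode.derive(index), salt }
}
// The notification body would carry the salt and be encrypted (and signed)
// to the destination's public key, so the recipient can re-derive the child
// key and check which sender actually posted it.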

Requirements:

  1. Standard storage of a public key.

  2. Implementation of client-side verification/derivation path lookup.

Item 2 is tricky, but not impossible. Item 1 is something I think we’ve talked about for a while (discussed a little bit here: https://github.com/blockstack/blockstack.js/issues/381), but we never came to a conclusion.


#12

And here’s an issue for requesting a canonical location for app-specific public keys:


#13

Could such a system be compatible with W3C’s ActivityPub?

Also, could such additions simply be configurable extensions that live on top of a Gaia Hub? E.g. I have my own Gaia Hub, and then I add an extension that adds ActivityPub features. This way the implementation of an inbox and the Gaia Hub itself could be kept independent of each other.
With this approach we might even be able to use an already existing ActivityPub implementation :slight_smile:


#14

Possibly! I’ll have to look more into the ActivityPub spec. I think it is designed for the mechanism to serve as a primary communication channel, rather than just as a method to signal, but it may be amenable to this situation.

Possibly, though I think this would complicate the system quite a bit: the inbox extension would need to hold a private key associated with the Gaia hub storage bucket it is writing to, and the user would have to communicate a different URL for the inbox. That’s all possible, but I think it would result in a more finicky system (and require that an inbox extension hold private keys for each inbox it served).


#15

So something similar to bcrypt?


#16

Very excited about this feature!