
MERCURY is the mechanism by which entities communicate with each other. An entity provides a number of "interfaces" via MERCURY, and other entities may access them.

By way of an introduction and some rationale, here are some blog thoughts behind the design decisions.

MERCURY handles the communication of these access requests across a network. There are really three cases. When the target entity is in the same cluster as the originating entity, it might be possible to handle the request on the local machine, in which case no network need be involved at all. If not, it can be handled elsewhere within the same cluster by using WOLFRAM's communications infrastructure. And if the request is for an entity in another cluster, it will have to use the dedicated inter-cluster MERCURY protocol, which runs over IRIDIUM on a different UDP port from WOLFRAM, for ease of firewalling.

However, the entities themselves need not worry about the distinctions; it's all abstracted away beneath the MERCURY model.

Entity IDs

Every entity is identified by an entity ID (EID), which is a binary object. It's not very meaningful to humans - CARBON provides a naming system so that humans need not be exposed to raw EIDs.

The full structure of an EID is:

  1. EID format version number
  2. Cluster ID

    1. Cluster master public key, or just a hash thereof.
    2. Cluster ID version number
    3. Cluster public key, or just a hash thereof.
    4. Optional signature of the cluster public key, against the cluster master public key.
    5. List of IPv4 and IPv6 addresses of inter-cluster MERCURY server nodes, organised by priority and weighting in the manner of SRV records in DNS
  3. Cluster-local part of entity ID (variable-length string)
  4. Optional persona field

    1. Persona class (IRON symbol)
    2. Persona parameters (any IRON object)
  5. Optional list of operations on endpoints for this entity, tagged with transit security classifications, hashcash costs, and any other message requirements for each. This is not an exhaustive list of available endpoints, merely a hint for initial transit security requirements.
  6. Optional list of accepted encryption algorithms, each with a transit security clearance. This is compulsory if the previous optional section appears, so that the classifications (which are opaque numbers) can be mapped to available encryption algorithms.
  7. EID signature (hash of the entire EID apart from this bit, against the cluster's public key)

Note that all hashes are prefixed with a byte specifying the hash algorithm. Clearly, this is not a small data structure, either.
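
To make that structure a little more concrete, here is a rough sketch of an EID as a Python data class. The field names and types are illustrative assumptions, not the actual wire encoding.

  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class ClusterID:
      master_key_or_hash: bytes        # cluster master public key, or a hash of it
      version: int                     # cluster ID version number
      public_key_or_hash: bytes        # cluster public key, or a hash of it
      key_signature: Optional[bytes]   # signature of the cluster key by the master key, if present
      server_addresses: list           # (priority, weight, address) entries, SRV-style

  @dataclass
  class EID:
      format_version: int
      cluster: ClusterID
      local_part: bytes                        # cluster-local part of the entity ID
      persona: Optional[tuple] = None          # (persona class symbol, persona parameters)
      endpoint_hints: Optional[list] = None    # per-operation transit classifications, hashcash costs, ...
      algorithms: Optional[list] = None        # accepted encryption algorithms, each with a clearance
      signature: bytes = b""                   # hash of the rest of the EID, signed with the cluster key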

Verification

The validity of an EID can be confirmed by checking the EID signature; that checks its internal validity, and that it was produced by the cluster with that public key. However, a cluster is really identified by its master public key hash rather than by the cluster public key, so even that does not confirm the cluster key has not been tampered with; if the signature of the cluster key is present, we can check it against the master key. In the worst case, the EID might contain only the hashes of the cluster key and the cluster master key rather than the full keys, and no signature of the cluster key, in which case the EID signature can't be checked at all!

So what do we do? Well, every MERCURY server node in the cluster can be asked to provide the full content of both keys and the signature of the public key against the master key. So given an EID, one can find a node to provide this extra data, which can then be cached in the calling cluster so we don't need to request it again.
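
As a sketch of that verification flow in Python: the helpers passed in (fetch_cluster_keys, check_signature, serialize_unsigned) and the cache shape are assumptions, not part of the real protocol.

  def verify_eid(eid, fetch_cluster_keys, check_signature, serialize_unsigned, cache):
      """Verify an EID's signature, fetching the full keys from one of the
      cluster's MERCURY server nodes when the EID only carries hashes.
      check_signature(data, signature, key), serialize_unsigned(eid) and
      fetch_cluster_keys(addresses) are assumed helpers; cache is a dict
      keyed by the cluster master key hash."""
      cluster = eid.cluster

      # An EID carrying full keys plus the cluster key signature could be
      # checked directly; for simplicity this sketch always consults the
      # cache and asks a server node on a miss.
      keys = cache.get(cluster.master_key_or_hash)
      if keys is None:
          keys = fetch_cluster_keys(cluster.server_addresses)
          cache[cluster.master_key_or_hash] = keys
      master_key, cluster_key, key_signature = keys

      # The cluster key must be signed by the cluster master key...
      if not check_signature(cluster_key, key_signature, master_key):
          return False
      # ...and the EID itself must be signed by the cluster key.
      return check_signature(serialize_unsigned(eid), eid.signature, cluster_key)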

Entity IDs are generated when an entity asks for its own EID. The entity can request whether the EID should contain full keys or just hashes, and whether to include optional parts; and if it does not specify either way, cluster-wide configuration defaults will be used. Whether it's worth including them in the EID or not is really a tuning parameter, based on how likely it is that the EID will need to be verified.

Cluster ID changes

A cluster is allowed to generate a new public key whenever it wants. When it does so, it must increment the cluster ID version number, and then start issuing the new key with EIDs minted thereafter. Clients who have a stored EID include the version number with network requests when they contact that entity, and a new copy of the cluster ID will be bundled with the response if it's outdated. When entity IDs are stored in TUNGSTEN, the cluster ID is stripped off and stored only once per node, so the cluster ID can be quickly updated. This mechanism is also used if the list of addresses of MERCURY server nodes changes.
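
A small sketch of how a client might keep its stored copy of a cluster ID fresh; the request and response shapes here are assumptions.

  def call_with_stored_eid(eid, send_request, cluster_store):
      """Send a request quoting the stored cluster ID version; if the reply
      bundles a newer cluster ID, replace the single shared copy of it."""
      response = send_request(eid, cluster_id_version=eid.cluster.version)
      newer = getattr(response, "updated_cluster_id", None)
      if newer is not None:
          # Stored EIDs share one cluster ID per node, so one update fixes them all.
          cluster_store[newer.master_key_or_hash] = newer
          eid.cluster = newer
      return response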

Failover

When MERCURY needs to make contact with an entity, it will use the published IP addresses. However, if the node it tries to contact does not respond, or responds with an error code indicating a node-local problem, it can try another in the specified priority order and with the specified weightings.

However, the client may choose to adjust the priorities and weightings a little if it realises that a particular node is particularly close in network terms, perhaps because the remote and local node addresses have a high number of prefix bits in common.

The node might reply with an error including a more recent cluster ID. If so, then the cluster ID needs to be updated, and the search started again with the new cluster ID's address list.

Also, the node might reply with a specific redirect to another IP address, which is then tried. If that fails as well, continue with the priority-based order; if the redirect included a newer cluster ID, then you have to start again from the beginning.
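
Putting those failover rules together, here is a rough sketch of node selection and retry in Python. The result-object fields (ok, redirect, newer_cluster_id) are assumptions, and the SRV-style ordering is simplified.

  import random

  def order_addresses(addresses):
      """Order (priority, weight, address) entries: ascending priority, with a
      weighted-random shuffle within each priority group, SRV-style."""
      ordered = []
      for priority in sorted({p for p, _, _ in addresses}):
          group = [(max(w, 1), a) for p, w, a in addresses if p == priority]
          while group:
              pick = random.uniform(0, sum(w for w, _ in group))
              running = 0.0
              for index, (weight, address) in enumerate(group):
                  running += weight
                  if pick <= running:
                      ordered.append(address)
                      group.pop(index)
                      break
      return ordered

  def contact(eid, try_node):
      """Try each published MERCURY server node in turn, following a redirect
      once and starting over if a node hands back a newer cluster ID."""
      for address in order_addresses(eid.cluster.server_addresses):
          result = try_node(address)
          if result.newer_cluster_id is not None:
              eid.cluster = result.newer_cluster_id
              return contact(eid, try_node)        # restart with the new address list
          if result.redirect is not None:
              result = try_node(result.redirect)   # one specific redirect to try
          if result.ok:
              return result
          # otherwise fall through to the next node in priority order
      raise ConnectionError("no MERCURY server node responded")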

Interfaces and Endpoints

Each entity provides a number of MERCURY interfaces; each of these is identified by an IRON symbol, which is a name in the CARBON naming hierarchy. However, for purposes of compactness on the wire, each interface is mapped to a particular "endpoint", identified by a small integer, within the scope of a particular entity.

The mapping from interface symbols to endpoints is available in the CARBON metadata exposed by an entity (and as CARBON itself runs over MERCURY, this is bootstrapped with the hardcoded knowledge that CARBON is available on endpoint zero).
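
A minimal sketch of resolving an interface symbol to an endpoint number; carbon_query is an assumed helper that performs a CARBON request against a given endpoint of the entity, and its query shape is invented for illustration.

  CARBON_ENDPOINT = 0   # hardcoded bootstrap: CARBON is always on endpoint zero

  def resolve_endpoint(entity_eid, interface_symbol, carbon_query):
      """Return the small integer endpoint number for a MERCURY interface symbol."""
      if interface_symbol == "carbon":
          return CARBON_ENDPOINT
      # Look the mapping up in the CARBON metadata the entity exposes.
      mapping = carbon_query(entity_eid, CARBON_ENDPOINT, "mercury-endpoint-map")
      return mapping[interface_symbol]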

The symbolic name of an interface, if resolved in CARBON, leads to a protocol specification that identifies what operations are available on the endpoint, and tags each with a numeric identifier for use on the wire.

The operations within an endpoint are basically the same as IRIDIUM request types; they might be message operations (which are asynchronous message sinks), request operations (which produce a reply), or connection establishments (which look like requests, but, if successful, return a connection handle).

The protocol specification gives a type declaration for the messages and responses for each operation; this means that the IRON binary encoding used on the wire can elide outer type tags where applicable, as the type is statically known at both ends.

Connections deserve a little more detailed explanation. When you open a connection, both the client (which originates the request) and the server (which receives it) provide interfaces within the context of that connection. In other words, when you request a connection, you need to provide your own mapping of MERCURY interface identifiers to lists of LITHIUM handlers in your entity that will be made available for the server to invoke as callbacks. At the server end, to accept the request, the server must provide a mapping too. These are like the interfaces published by entities, except that they are within the scope of the connection. The connection is assigned a unique identity at both the client and the server, which is provided whenever the handlers are invoked, so the entities at each end can associate "connection state" with the identified connection if they need to.

Connections exist to give a stateful context for communications, and also as a vehicle to identify groups of MERCURY traffic so they can have bandwidth reservations applied, if necessary.

An important thing to realise about connections is that they exist between entities, not between nodes. A MERCURY connection might be created from one node to another, and this will generally cause an IRIDIUM virtual circuit to be created to handle the connection; but if either node goes down and the entities at each end of the connection try to use it, then it will be re-established to a different node (possibly losing its bandwidth reservation if it can't be re-established). The IRIDIUM VCs that support MERCURY connections are transient, and re-creatable on demand.
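
For example, opening a connection from the client side might look roughly like this; mercury.connect, the callback-map shape, and the connection fields are all invented for the sketch.

  def open_connection(mercury, server_eid, endpoint, my_callbacks):
      """Open a MERCURY connection, offering our own callback interfaces.

      my_callbacks maps interface symbols to {operation number: LITHIUM handler}
      dictionaries that the server may invoke within the connection's scope."""
      connection = mercury.connect(server_eid, endpoint, callbacks=my_callbacks)
      # Each end receives a connection identity; key per-connection state on it.
      connection_state = {connection.local_id: {}}
      return connection, connection_state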

When protocols change, a new interface symbol must be chosen for the new version!

Personas

If a message is sent to an EID that contains a persona field, then the persona field is included with the request.

This allows an entity to publish lots of versions of its EID, with different persona fields, and be able to tell which one was used when requests come in.

Security

There are two main security concerns in MERCURY: Access control to entity endpoints, and transit security of the messages in flight from snoopers or tamperers.

Access control

For an entity to access an operation on an endpoint on another entity, the request must satisfy both mandatory (configured by the cluster security administrator) and discretionary (configured by the entity itself) access control, as well as anti-spam protection.

The accessing entity

Access control decisions are based around which entity is trying to do the accessing.

For requests between entities within the same cluster, we can trust MERCURY on the originating node to include this information, and we can trust WOLFRAM to protect it from tampering.

For requests from other clusters, we need to cryptographically validate the source EID included in the request, and reject the request if it is not correctly signed by the cluster public key.

That gives us the originating entity ID. The important parts of it from an access control perspective are the cluster master public key hash (which uniquely identifies the cluster), the cluster-local entity ID (which identifies the entity within the cluster), and the persona field (which identifies the persona the entity is adopting, if it has several). Everything else is unrelated to entity identity and can be ignored.

However, a request may also bear a certificate, stating that the originating entity ID is allowed to act on behalf of another entity ID until a specified absolute timestamp, signed by the cluster key of the other entity ID. If the request's originating entity ID matches the originating entity ID named in the certificate, and the timestamp has not passed, then we can consider the request as having come from the other entity ID. However, we should still log the "real" originating entity ID as part of the audit trail, along with the other one!

In principle, a chain of such certificates could be presented with the request, all the way from the originating entity ID to a final other entity ID.
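
A sketch of walking such a chain; the certificate field names are assumptions, and check_signature is an assumed helper.

  import time

  def effective_origin(originating_eid, certificates, check_signature, now=None):
      """Walk a chain of act-on-behalf-of certificates and return the entity ID
      the request should be treated as coming from, plus the real originator
      for the audit trail."""
      now = time.time() if now is None else now
      acting_as = originating_eid
      for cert in certificates:
          if cert.subject != acting_as:       # must name the current actor
              break
          if now >= cert.valid_until:         # absolute expiry timestamp has passed
              break
          # Signed by the cluster key of the entity being acted for.
          if not check_signature(cert.body, cert.signature, cert.acts_for_cluster_key):
              break
          acting_as = cert.acts_for
      return acting_as, originating_eid       # (effective identity, real originator to log)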

Mandatory access control

An entity has an access classification level attached by the cluster's security administrator.

Each endpoint has an access classification level attached by the cluster's security administrator.

Each operation in an endpoint has an access classification level attached by the cluster's security administrator.

The defaults, when new entities are created, are set by an algorithm supplied by the cluster's security administrator.

The clearance that an entity needs to even access an operation on an endpoint is the highest of the three classification levels.

When the accessing entity is within the same cluster, its clearance can simply be looked up.

When it's from another cluster, then the clearance might be looked up in a list of trusted clusters; either the specific entity will be listed, or a default for the cluster as a whole will be used. Either way, each remote cluster has a maximum clearance attached, and the lower of that and the clearance found for the entity will be used.
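
A sketch of that clearance arithmetic; the shapes of local_cluster and trusted_clusters are assumptions for illustration.

  def required_clearance(entity_level, endpoint_level, operation_level):
      """The clearance needed for an operation is the highest of the entity,
      endpoint, and operation classification levels."""
      return max(entity_level, endpoint_level, operation_level)

  def accessor_clearance(source_eid, local_cluster, trusted_clusters):
      """Work out the clearance of the accessing entity."""
      if source_eid.cluster.master_key_or_hash == local_cluster.master_key_hash:
          # Same cluster: just look the entity's clearance up.
          return local_cluster.clearance_of(source_eid.local_part)
      remote = trusted_clusters.get(source_eid.cluster.master_key_or_hash)
      if remote is None:
          return 0    # cluster not in the trusted list: no clearance at all
      entity_clearance = remote.entities.get(source_eid.local_part, remote.default_clearance)
      # Never exceed the maximum clearance attached to the remote cluster.
      return min(entity_clearance, remote.max_clearance)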

Discretionary access control

Each entity also has an access control list attached to each operation in each endpoint, and to each endpoint, and to the entity as a whole.

The access control list is simply a list of entities that may access it, or alternatively, a blacklist of entities that may not access it.

However, the cluster configuration may list entities (potentially in other clusters) that are able to bypass ACLs altogether, and are always considered "allowed". If given a sufficient clearance, such an entity can do anything, which is a useful facility to have for people who accidentally remove themselves from the administration endpoint ACL for their entity.
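
A sketch of a single discretionary check; the ACL field names and the bypass list shape are assumptions.

  def dac_allows(source_eid, acl, bypass_entities):
      """Discretionary check against one ACL (entity-wide, per-endpoint, or
      per-operation). An ACL is either a whitelist or a blacklist of entity IDs."""
      if source_eid in bypass_entities:
          return True                    # cluster-configured ACL bypass
      if acl.is_blacklist:
          return source_eid not in acl.entities
      return source_eid in acl.entities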

Anti-Spam protection

Spamming is sending lots of unwanted requests via MERCURY, either to try and get attention, to hog a public service, or to try and deny service to others.

As such, alongside the usual access control lists on operations, endpoints, and the entity as a whole, a HashCash cost may be attached. Incoming messages that do not have a hashcash stamp showing sufficient proof of work will be rejected, and the error message will specify how much hashcash is required, before any other security checking is performed.
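
A sketch of such a check; the use of SHA-256 and the stamp shape are assumptions, and the real stamp format is up to the implementation.

  import hashlib

  def hashcash_bits(stamp: bytes) -> int:
      """Count the leading zero bits of the stamp's hash as proof of work."""
      digest = hashlib.sha256(stamp).digest()
      bits = 0
      for byte in digest:
          if byte == 0:
              bits += 8
              continue
          bits += 8 - byte.bit_length()
          break
      return bits

  def check_antispam(stamp, required_bits):
      """Reject under-paid messages before any other security checks are done,
      telling the sender how much work is needed."""
      if stamp is None or hashcash_bits(stamp) < required_bits:
          raise PermissionError("hashcash of at least %d bits required" % required_bits)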

Transit security

When a request is being sent to a remote entity on another node, potentially in another cluster, or a response is being sent back, it must be protected against eavesdropping and tampering by the untrusted network. Both cases are considered as "a message" here.

This is represented as a classification level for the information in transit, which must be met by a clearance of the communication channel. The classification level for a request to, or a response from, an operation on an endpoint defaults to the access classification level of that endpoint, but it may be configured to be higher on a per-operation basis. For instance, a publicly accessible interface to buy things may be accessible by all, but the purchase requests (brimming with delivery addresses and payment details) should still be protected from snooping and tampering.

There is also a lower bound on the transit classification of messages coming from an entity (be they outgoing requests, or replies to requests that came to the entity). By default it is the clearance of the entity as a whole, but a security administrator may set it to be lower, thereby declaring that the entity is trusted to not leak classified information in its care to lower-cleared entities.

A minimum transit classification for a given operation may be specified by the entity, in which case it will reject requests with a lower classification (prompting the sender to try again with a more trusted communication path). And the sender may use a higher transit classification if they wish to. The entity ID may contain a suggested minimum transit classification for any given operation, to avoid an initial rejection.
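
As a small sketch of how the transit classification of an outgoing message might be chosen under those rules; the parameter names are illustrative.

  def transit_classification(endpoint_level, operation_override, entity_floor, eid_hint=0):
      """Classification for a message in transit: defaults to the endpoint's
      access classification, may be raised per operation, never drops below
      the sending entity's configured lower bound, and honours any minimum
      hinted in the target EID."""
      return max(endpoint_level, operation_override or 0, entity_floor, eid_hint)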

Within the cluster

Within the cluster, transit security is relatively easy, and is the responsibility of WOLFRAM. If there is a communication group offering a protected path at the required clearance, then it can be used. Otherwise, an encryption algorithm offering the desired clearance can be chosen and used.

MERCURY will not allow a message with a transit classification higher than the clearance of the target entity to be sent.

Between clusters

Between clusters, classifications and clearances are not comparable - they are just numbers with cluster-specific interpretation. However, each cluster can express a classification or clearance by looking up what encryption algorithms it considers trusted at that level, and sending a list of encryption algorithm IDs. The receiving cluster can then work out the lowest clearance it assigns to any of those algorithms, and have a local equivalent of the clearance or classification.

Therefore, the suggested minimum transit clearances in EIDs are given with reference to lists of encryption algorithm codes, and if a message comes in with an encryption algorithm considered insufficiently trusted for the minimum clearance required, a list of acceptable algorithms is returned in the error message. The originating node chooses an algorithm from the intersection of that set and the set it considers adequate (ideally, the least expensive option...) and tries again.

Messages from other clusters are NOT rejected if the transit classification exceeds the entity's clearance - that check should be performed by the sender.
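
A sketch of the algorithm negotiation from the sender's side; the shapes of local_trust and relative_cost are assumptions.

  def negotiate_algorithm(offered_ids, local_trust, minimum_clearance, relative_cost):
      """Pick an encryption algorithm acceptable to both clusters.

      offered_ids: algorithm IDs the peer listed as acceptable; local_trust
      maps algorithm ID to the clearance this cluster assigns it; relative_cost
      maps algorithm ID to expense."""
      acceptable = [a for a in offered_ids
                    if local_trust.get(a, 0) >= minimum_clearance]
      if not acceptable:
          raise ValueError("no mutually trusted encryption algorithm")
      # Ideally, pick the least expensive algorithm that is still trusted enough.
      return min(acceptable, key=lambda a: relative_cost.get(a, float("inf")))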

Handling incoming messages

Request routing

When the MERCURY server stack on a node receives an incoming request, it has a decision to make if the request is for an entity that is not mirrored locally (which it can find out by asking WOLFRAM).

It could reply with a redirect to a public node that mirrors the entity (if there is one).

It can redirect the request internally, using MERCURY over WOLFRAM to pass the request to a node that mirrors that entity.

Or it can run the request locally, causing all access to the entity's state to be done remotely via WOLFRAM to a node that carries the entity.

And regardless of which of the above it chooses, it may or may not perform security checks on the request before redirecting...

What to do should be configurable in the cluster entity!
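
A sketch of that decision; the helper names and the policy values are assumptions, standing in for whatever the cluster entity's configuration actually provides.

  def route_incoming(request, wolfram, cluster_config, local_node):
      """Decide what to do with a request for an entity not mirrored locally."""
      if wolfram.is_mirrored_locally(request.entity):
          return local_node.handle(request)
      policy = cluster_config.routing_policy(request.entity)
      if policy == "redirect":
          # Point the caller at a public node that mirrors the entity, if any.
          return local_node.redirect_to(wolfram.public_mirror_of(request.entity))
      if policy == "forward":
          # Pass the request on internally, using MERCURY over WOLFRAM.
          return wolfram.forward_to_mirror(request.entity, request)
      # Otherwise run it here, reaching the entity's state remotely via WOLFRAM.
      return local_node.handle(request)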

Handling

An incoming message, be it to the primary MERCURY interface of an entity or to a connection interface, is handled by firing off an appropriate handler request to LITHIUM.

A message sent to the primary interface is mapped to a handler by looking in a particular section in the entity's TUNGSTEN state (/argon/mercury/entity-handlers). Therein are CARBON tuples mapping endpoint and operation numbers to LITHIUM handler IDs.

If the incoming request message contains a persona field, as the request was made to an EID with a persona field attached, then a different CARBON tuple is looked up - one which maps the persona class symbol, endpoint number, and operation number to a LITHIUM handler ID. This lets entities provide different interfaces on different personas.

The other part of the persona field (persona parameters) is simply passed to the handler as part of the request metadata.
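
A sketch of that dispatch; entity_state stands in for the entity's TUNGSTEN state, the tuple keys under /argon/mercury/entity-handlers are assumptions, and lithium.fire is an invented stand-in for firing off a handler request.

  def dispatch_incoming(entity_state, lithium, message):
      """Map an incoming primary-interface message to a LITHIUM handler."""
      handlers = entity_state["/argon/mercury/entity-handlers"]
      if message.persona is not None:
          persona_class, persona_params = message.persona
          # Persona-specific mapping: (persona class, endpoint, operation) -> handler.
          handler_id = handlers[(persona_class, message.endpoint, message.operation)]
      else:
          persona_params = None
          handler_id = handlers[(message.endpoint, message.operation)]
      # The persona parameters are passed along as request metadata.
      lithium.fire(handler_id, message.body, metadata={"persona-parameters": persona_params})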