LITHIUM is the dispatcher. From the perspective of an application programmer, it's the core of an ARGON system. LITHIUM is what actually invokes their code.


Entities are stored in TUNGSTEN: both their code and their current state. The code within an entity has entry points, known as handlers; the list of handlers is stored within the entity's state in TUNGSTEN. Every handler has an ID and some metadata. Part of this metadata is for consumption by LITHIUM, and explains how to invoke the handler; the rest is for consumption by the system component that asks LITHIUM to invoke the handler - generally MERCURY for handlers that provide services to remote clients over the network, CAESIUM for handlers that are invoked periodically according to a schedule, and WOLFRAM for distributed processing within the cluster.

The handlers are identified by symbolic IDs, and are specified in a TUNGSTEN section called /argon/lithium/handlers. CARBON tuples attach executable code to each handler, along with metadata for the HELIUM scheduler.
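As a rough illustration, a handler table entry might be modelled like this (a sketch in Python; all field and type names are hypothetical, as ARGON does not prescribe a concrete schema):

```python
from dataclasses import dataclass, field

@dataclass
class HandlerEntry:
    """One entry in an entity's handler table (hypothetical shape)."""
    handler_id: str       # symbolic ID of the handler
    code: bytes           # executable code, attached via a CARBON tuple
    language: str         # language model used to run it, e.g. "CHROME"
    lithium_meta: dict    # consumed by LITHIUM: how to invoke the handler
    requester_meta: dict  # consumed by the requester: MERCURY, CAESIUM or WOLFRAM

@dataclass
class EntityState:
    """An entity's state in TUNGSTEN, reduced here to its handler table."""
    handlers: dict = field(default_factory=dict)

    def lookup(self, handler_id):
        return self.handlers[handler_id]
```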

Handling a request

LITHIUM is given a request to invoke a specific handler in a specific entity, along with information from the requester about the HELIUM resource limits and scheduling priority to impose, and any parameters to be passed to the handler.

WOLFRAM is consulted to open a new transaction, and then to obtain access to the entity's state from TUNGSTEN. LITHIUM will not only be asked to invoke handlers on entities mirrored on the current node; in principle WOLFRAM might have to fetch entity state from another node in the cluster, although for efficiency requests will usually be routed to a node that mirrors the entity. LITHIUM need not be aware of any of this. Once the entity state is available, the handler metadata can be loaded from it.

A HELIUM thread is created with the scheduling details and resource limit allowance obtained by merging the request details with the handler's static details (which generally means taking the minima), and is associated with the entity context so that other ARGON kernel services can know which entity it relates to, for access control and other decisions.
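The merge of request and handler resource limits can be sketched as taking the per-resource minimum (resource names here are hypothetical; the real merge rules are HELIUM's concern):

```python
def merge_limits(request_limits, handler_limits):
    """Merge per-resource limits from the request with the handler's
    static limits by taking the minimum of each - a sketch of the
    "take the minima" rule described above."""
    merged = {}
    for resource in set(request_limits) | set(handler_limits):
        values = [limits[resource]
                  for limits in (request_limits, handler_limits)
                  if resource in limits]
        merged[resource] = min(values)
    return merged

# The stricter of the two bounds wins for each resource:
merge_limits({"cpu_ms": 500, "mem_kb": 4096}, {"cpu_ms": 200})
# -> {"cpu_ms": 200, "mem_kb": 4096}
```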

The code in that thread simply asks the programming language model chosen for the handler (listed in its metadata) - generally CHROME, although other options may be added in future - to run the handler's code with the arguments supplied in the request. For CHROME, this means checking whether a compiled copy of the handler is already in the cache and running it if so, or compiling it first and then running it.
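The CHROME cache-or-compile step might look roughly like this (a sketch; the cache key, the compile function, and the calling convention are all assumptions):

```python
# Cache of compiled handlers, keyed by a (hypothetical) hash of the source.
compiled_cache = {}

def run_chrome_handler(code_hash, source, args, compile_fn):
    """Run a handler's code, compiling it only on a cache miss (sketch)."""
    fn = compiled_cache.get(code_hash)
    if fn is None:
        fn = compile_fn(source)          # compile once...
        compiled_cache[code_hash] = fn   # ...then reuse the compiled copy
    return fn(*args)
```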

When the thread completes, the WOLFRAM transaction is committed, unless it terminated with an error, in which case the transaction is aborted. A result callback, if provided in the request, is then invoked with the success or failure status of the request.
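The commit-or-abort logic around the thread's completion can be sketched as follows (the transaction object and callback signature are hypothetical):

```python
def dispatch(transaction, run_handler, args, on_result=None):
    """Run a handler inside a WOLFRAM transaction (sketch): commit on
    success, abort on error, then invoke the result callback, if one
    was supplied, with the outcome."""
    try:
        result = run_handler(*args)
    except Exception as err:
        transaction.abort()
        outcome = ("failure", err)
    else:
        transaction.commit()
        outcome = ("success", result)
    if on_result is not None:
        on_result(*outcome)
    return outcome
```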

Special Cases

Some requests are handled differently to the above, however.

Requests to a special "node entity" (whose ID is hard-coded into the node's configuration) are passed directly to NITROGEN for handling; no code is loaded from the entity.

CARBON read requests arriving via MERCURY (including those to the node entity) are also handled specially: they are dispatched to CARBON for local handling, which allows static information published via CARBON to be retrieved directly from the entity's state in TUNGSTEN, without LITHIUM having to invoke any entity handlers.

MERCURY requests to an entity's IODINE administrative protocol interface are also special-cased by passing them to WOLFRAM, so that entities cannot override them in any way. This is necessary so that a broken entity can be repaired.
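The special-case routing described above can be summarised in a sketch (the request fields, path tags, and the order of the checks are purely illustrative):

```python
def route_request(request, node_entity_id):
    """Choose a handling path for an incoming request - a sketch of
    the special cases described above."""
    if request["kind"] == "carbon-read":
        return "CARBON"    # static published data, read directly
    if request["kind"] == "iodine-admin":
        return "WOLFRAM"   # admin protocol: entities cannot override it
    if request["entity"] == node_entity_id:
        return "NITROGEN"  # node entity: no entity code is loaded
    return "LITHIUM"       # ordinary handler invocation
```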


An even more special case arises when an entity is being debugged. The administrative interface can be used to specify that an entity is being debugged via a given MERCURY connection to the administrative interface.

LITHIUM does not directly concern itself with this; the programming language handlers such as CHROME are responsible for noticing that debug mode is in effect and, if it is, offering debugging facilities via the debug connection. Common infrastructure for managing the debugging connection is provided by ARGON itself.


Isolation between different entities' handlers and the kernel is the responsibility of the programming language module. The handler may only access resources under the guise of its entity, for access control and auditing purposes; the entity the handler is associated with is recorded in the state of the HELIUM thread, so that other kernel components can use it to make access control decisions. So the handler is given access to TUNGSTEN via WOLFRAM, but it can only access state within its own entity. It is given access to the MERCURY client, but it can only issue requests with its own entity ID as the originator (although there is the option to include a certificate granting it the time-limited right to be treated as another entity for access-control purposes). It is given access to WOLFRAM's other functions, but only the ability to access global state such as the list of accessible nodes, and the ability to distribute parallel tasks to other nodes by requesting that job generators which invoke specified handlers on the same entity be registered with HELIUM on those nodes. It is given access to HELIUM, but only to access the state of the context it is running in, and without the option of raising its own resource consumption limits or increasing its scheduling priority (except perhaps via an AURUM interface to "spend" resource credits into HELIUM).
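The entity-scoped access to TUNGSTEN described above might be sketched as a wrapper that refuses cross-entity reads (illustrative only; the class, store layout, and method names are hypothetical):

```python
class ScopedTungsten:
    """TUNGSTEN access as seen by a handler: confined to the entity
    the HELIUM thread is running on behalf of (sketch)."""

    def __init__(self, store, entity_id):
        self._store = store      # backing store, keyed by (entity, key)
        self._entity = entity_id # the entity recorded in the thread state

    def read(self, entity_id, key):
        if entity_id != self._entity:
            raise PermissionError(
                "handlers may only access their own entity's state")
        return self._store[(entity_id, key)]
```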

In particular, since multiple handlers, and other system components, will all be running at once on the same entity, memory protection is paramount. Either the programming language must certify that code written in it cannot obtain a reference to any object other than those it creates itself or is given references to by other parts of the system, or it must be run within some kind of restricted virtual machine and access the rest of the system via a hypercall interface, as POSIX processes do.