Module executors

To allow modules to be loaded, launched and executed, Sivelkiria introduces the concept of ‘executors’. Executors are themselves modules. They assume the following responsibilities towards the modules they execute:

  1. Loading modules into RAM; initializing, finalizing and unloading them.
  2. Linking the API provided by the operating system to the module’s code, ensuring that calls and data pass from the operating system API to the module code and back.
  3. Loading and preparing the runtime environment required by the module.
  4. Translating module code from any intermediate representation (source code of a script; byte code; intermediate language; native build intended for another platform) into a sequence of native instructions. Any specific actions at this stage (interpreting the script, interpreting the byte code, JIT-compilation, emulation, etc.) are defined by the module’s delivery format.
  5. Hiding the method of module execution from the operating system and other modules.
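The responsibilities listed above can be summarized as a hypothetical executor interface. Everything in this sketch — the class name, the method names and signatures — is an illustrative assumption, not part of any actual Sivelkiria API:

```python
from abc import ABC, abstractmethod


class Executor(ABC):
    """Illustrative sketch of an executor's responsibilities."""

    @abstractmethod
    def load(self, module_image: bytes) -> int:
        """1. Load a module into RAM and initialize it; return a handle.
        Finalization and unloading would be the symmetric operation."""

    @abstractmethod
    def bind_os_api(self, handle: int) -> None:
        """2. Link the OS API to the module's code, routing calls and
        data from the operating system to the module and back."""

    @abstractmethod
    def prepare_runtime(self, handle: int) -> None:
        """3. Load and prepare the runtime environment the module needs."""

    @abstractmethod
    def translate(self, handle: int) -> None:
        """4. Translate the delivery format (script source, byte code,
        intermediate language, foreign native build) into native
        instructions, e.g. by interpreting, JIT-compiling or emulating."""

    # 5. Callers interact only with this interface, so the method of
    # execution stays hidden from the OS and from other modules.
```

Because the operating system talks to every executor through the same interface, responsibility 5 falls out for free: nothing outside the executor can tell interpretation from JIT compilation from emulation.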

The executor can also perform the task of module isolation, provided that two modules co-existing in the same address space cannot cause a security breach (e.g. for managed code), and that the executor is stable enough that a fault in a module being executed does not impact the executor itself or the other modules it maintains.
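The fault-containment half of that condition can be illustrated with a minimal sketch: an executor hosting several modules in one address space catches a module’s fault and reports it instead of letting it propagate. This is an assumed illustration, not Sivelkiria’s actual mechanism:

```python
class InProcessExecutor:
    """Illustrative: hosts modules in the executor's own address space.

    A fault raised by one module's entry point is contained here, so
    neither the executor nor its other modules are brought down.
    """

    def run(self, entry_point):
        try:
            return ("ok", entry_point())
        except Exception as exc:
            # The fault is contained and reported rather than propagated.
            return ("faulted", type(exc).__name__)
```

An executor that cannot make this guarantee would instead rely on the operating system to separate modules into different address spaces.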

This concept makes it possible to use differently built modules as part of a connected system. For example, code compiled by a C++ compiler will be loaded by an executor supporting native code into a separate address space, and provided with the necessary runtime environment. Managed IL code can be loaded by an executor that supports the execution of managed code, and any isolation can be performed either by the operating system (by loading different modules into different address spaces) or by the executor itself (by loading different modules into different application domains of the same address space).

An exception would be the case of executing native code that runs under a host operating system as a set of libraries and / or processes. In this case, running native code directly is only allowed if it is guaranteed that the code makes no calls to the host operating system’s API bypassing Sivelkiria, or if such calls are necessary for the module’s task. For example, this condition can be met in a controlled corporate environment, as well as in open source projects. However, modules that access the host operating system’s resources require a way to access its API by definition. If the ‘purity’ of the native code can’t be guaranteed, such code can be run in emulation mode (just like any code built for a different platform).

In the case where Sivelkiria runs as a primary operating system, its kernel is responsible for isolating address spaces. In the case where it runs as a set of libraries and / or processes under a host operating system, modules can be loaded into different processes of a host OS to ensure isolation. When running in guest mode, system modules that are responsible for working with hardware (e.g. file system drivers) will be replaced with modules that emulate this behavior, thus hiding the difference from application modules.
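The guest-mode substitution can be pictured as selecting a different implementation behind the same module slot. The module names and the single “filesystem” slot below are assumptions for illustration only:

```python
def system_modules(mode: str) -> dict:
    """Illustrative: pick hardware-facing or emulating system modules.

    Application modules see the same slot name ("filesystem") either
    way, so the difference between primary and guest mode is hidden.
    """
    if mode == "primary":
        return {"filesystem": "FsDriverModule"}     # talks to hardware
    if mode == "guest":
        return {"filesystem": "FsEmulationModule"}  # wraps host OS files
    raise ValueError(f"unknown mode: {mode!r}")
```

Since both variants fill the same slot, an application module that requests the filesystem never learns which mode the system is running in.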

Although it has been stated that Sivelkiria only allows data exchange between programs to be done via the API it provides, there are two exceptions.

The first exception is where the aforementioned runtime environment is loaded directly into the module’s address space because it is required for the module to work correctly. The concept of dynamically linked libraries is generally not supported by Sivelkiria, because it is a form of code reuse that is already covered by module prototypes and data interfaces.

The second exception allows modules that are distributed together (e.g. in the same package) to move shared code into a shared library. This library can only be used by the modules that belong to the same package. However, dynamic library lookup is not supported.
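The second exception can be sketched as a resolution rule: a shared library is visible only to modules from its own package, and there is no system-wide dynamic lookup. The function, its parameters and the package layout are illustrative assumptions:

```python
def resolve_shared_library(module_package: str,
                           library_package: str,
                           library_name: str,
                           package_contents: dict) -> str:
    """Illustrative: resolve a package-private shared library.

    Modules may only link libraries shipped in their own package;
    dynamic (system-wide) library lookup is not supported, so a
    library absent from the package is simply not found.
    """
    if module_package != library_package:
        raise PermissionError("shared libraries are package-private")
    if library_name not in package_contents.get(library_package, ()):
        raise FileNotFoundError(
            f"{library_name!r} is not shipped in {library_package!r}")
    return f"{library_package}/{library_name}"
```

The deliberate absence of any search path in this sketch mirrors the text: a library is either shipped alongside the module or it does not exist.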

On system start, some executors are loaded into RAM together with the kernel, in order to avoid dependency loops that would prevent any executor from being loaded and started first. This mostly applies to native code executors. Other executors get loaded as per general rules.
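The bootstrap rule above can be illustrated by splitting executors according to the delivery format of their own code: an executor whose code is native has no other executor to run it, so loading it lazily would deadlock. The executor names and formats below are assumed for illustration:

```python
def boot_plan(executors: dict):
    """Illustrative: split executors into kernel-time and on-demand loads.

    `executors` maps an executor's name to the delivery format of its
    own code.  A native executor must be brought up with the kernel,
    since no already-running executor exists to load it; all others
    follow the general (lazy) loading rules.
    """
    with_kernel, on_demand = [], []
    for name, fmt in executors.items():
        (with_kernel if fmt == "native" else on_demand).append(name)
    return with_kernel, on_demand
```

Once the native executor is up, it can bootstrap a managed-code executor, which in turn can load script executors, and so on; the dependency loop only threatens the very first step.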