andsmedeiros / uevloop

Licence: MIT license
A fast and lightweight event loop for embedded platforms.


µEvLoop

A fast and lightweight event loop aimed at embedded platforms in C99.

About

µEvLoop is a microframework built around a lightweight event loop. It provides the programmer with the building blocks to put together async, interrupt-based systems.

µEvLoop is loosely inspired by the Javascript event loop and aims to provide a similar programming model. Many similar concepts, such as events and closures, are included. It is aimed at environments with very restricted resources, but runs on all kinds of platforms.

DISCLAIMER

µEvLoop is in its early days and, for now, the API may change at any moment. Although it is well tested, use it with caution. Either way, feedback is most welcome.

Highlights

  • As minimalist and loosely coupled as possible.
  • Does not allocate any dynamic memory on its own. All memory needed is statically allocated either explicitly by the user or implicitly by containers.
  • Small memory footprint and runtime latency.
  • Does not try to make assumptions about the underlying system.
  • Extremely portable and conforming to ISO C99.
  • Depends only on a very small subset of the standard libc, mostly for fixed size integers and booleans.
  • Allows for excellent execution predictability and ease of debugging.
  • Can be used baremetal or alongside RTOSes.
  • Well tested and well documented.

API documentation

The API documentation is automatically generated by Doxygen. Find it here.

Testing

Tests are written using a simple set of macros. To run them, execute make test.

Please note that the makefile shipped is meant to be run on modern Linux systems. Right now, it makes use of bash commands and utilities and expects libSegFault.so to be in a hardcoded path.

If this doesn't fit your needs, edit it as necessary.

Test coverage

To generate code coverage reports, run make coverage. This requires gcov, lcov and genhtml to be on your PATH. After running, the results can be found at uevloop/coverage/index.html.

Core data structures

These data structures are used across the whole framework. They can also be used by the programmer in userspace as required.

All core data structures are unsafe. Be sure to wrap access to them in critical sections if you mean to share them amongst contexts asynchronous to each other.

Closures

A closure is an object that binds a function to some context. When invoked with arbitrary parameters, the bound function is called with both context and parameters available. With closures, some very powerful programming patterns, such as functional composition, become much easier to implement.

Closures are very light and it is often useful to pass them around by value.

Basic closure usage

#include <stdint.h>
#include <uevloop/utils/closure.h>

static void *add(void *context, void *params){
    uintptr_t value1 = (uintptr_t)context;
    uintptr_t value2 = (uintptr_t)params;

    return (void *)(value1 + value2);
}

// ...

// Binds the function `add` to the context (5)
uel_closure_t add_five = uel_closure_create(&add, (void *)5);

// Invokes the closure with the parameters set to (2)
uintptr_t result = (uintptr_t)uel_closure_invoke(&add_five, (void *)2);
// Result is 7

A word on (void *)

Closures take the context and parameters as void pointers and return the same. This makes it possible to pass complex objects to them and return complex objects from them.

Often, however, the programmer may find the values passed or returned are small and simple (i.e. no larger than a pointer). If so, it is absolutely valid to cast from/to a uintptr_t or another data type known to be at most the size of a pointer. The above example does that to avoid creating unnecessary object pools or allocating dynamic memory.

Circular queues

Circular queues are fast FIFO (first-in-first-out) structures that rely on a pair of indices to maintain state. As the indices are moved forward on push/pop operations, the data itself is not moved at all.

The sizes of µEvLoop's circular queues are required to be powers of two, so fast bitwise modulo arithmetic can be used. As such, on queue creation, the size must be provided in its log2 form.

FORGETTING TO SUPPLY THE QUEUE'S SIZE IN LOG2 FORM MAY CAUSE THE STATIC ALLOCATION OF GIANT MEMORY POOLS

Basic circular queue usage

#include <stdint.h>
#include <uevloop/utils/circular-queue.h>

#define BUFFER_SIZE_LOG2N   (5)
#define BUFFER_SIZE         (1<<BUFFER_SIZE_LOG2N)  // 1<<5 == 2**5 == 32

// ...
uel_cqueue_t queue;
void *buffer[BUFFER_SIZE];
// Creates a queue with 32 (2**5) slots
uel_cqueue_init(&queue, buffer, BUFFER_SIZE_LOG2N);

// Push items in the queue
uel_cqueue_push(&queue, (void *)3);
uel_cqueue_push(&queue, (void *)2);
uel_cqueue_push(&queue, (void *)1);

// Pop items from the queue
uintptr_t value1 = (uintptr_t) uel_cqueue_pop(&queue); // value1 is 3
uintptr_t value2 = (uintptr_t) uel_cqueue_pop(&queue); // value2 is 2
uintptr_t value3 = (uintptr_t) uel_cqueue_pop(&queue); // value3 is 1

Circular queues store void pointers. As with closures, this makes it possible to store complex objects within the queue, but typecasting to a smaller value type is often more useful.

Object pools

On embedded systems, hardware resources such as processing power and RAM are often very limited. As a consequence, dynamic memory management can become very expensive in both processing time and memory overhead.

Object pools are statically allocated arrays of objects whose addresses are stored in a queue. Whenever the programmer needs an object at runtime, instead of dynamically allocating memory, it is possible to simply pop an object pointer from the pool and use it right away.

Because object pools are statically allocated and backed by circular queues, they are very manageable and fast to operate.

Basic object pool usage

#include <stdint.h>
#include <uevloop/utils/object-pool.h>

typedef struct obj obj_t;
struct obj {
    uint32_t num;
    char str[32];
    // Whatever
};

// ...

// The log2 of our pool size.
#define POOL_SIZE_LOG2N   (5)
UEL_DECLARE_OBJPOOL_BUFFERS(obj_t, POOL_SIZE_LOG2N, my_pool);

uel_objpool_t my_pool;
uel_objpool_init(&my_pool, POOL_SIZE_LOG2N, sizeof(obj_t), UEL_OBJPOOL_BUFFERS(my_pool));
// my_pool now is a pool with 32 (2**5) obj_t

// ...

// Whenever the programmer needs a fresh obj_t
obj_t *obj = (obj_t *)uel_objpool_acquire(&my_pool);

// When it is no longer needed, return it to the pool
uel_objpool_release(&my_pool, obj);

Linked lists

µEvLoop ships a simple linked list implementation that holds void pointers, as usual.

Basic linked list usage

#include <stddef.h>
#include <stdint.h>
#include <uevloop/utils/linked-list.h>

// ...

uel_llist_t list;
uel_llist_init(&list);

uel_llist_node_t nodes[2] = {
    {(void *)1, NULL},
    {(void *)2, NULL}
};

// Push items into the list
uel_llist_push_head(&list, &nodes[0]);
uel_llist_push_head(&list, &nodes[1]);

// List now is TAIL-> [1]-> [2]-> NULL. HEAD-> [2]
uel_llist_node_t *node1 = (uel_llist_node_t *)uel_llist_pop_tail(&list);
uel_llist_node_t *node2 = (uel_llist_node_t *)uel_llist_pop_tail(&list);

// node1 == nodes[0] and node2 == nodes[1]

Containers

Containers are objects that encapsulate the declaration, initialisation and manipulation of core data structures used by the framework.

They also encapsulate manipulation of these data structures inside critical sections, ensuring safe access to shared resources across the system.

System pools

The syspools component is a container for the system internal object pools. It contains pools for events and linked list nodes used by the core components.

The system pools component is meant to be internally operated only. The only responsibility of the programmer is to allocate, initialise and provide it to other core components.

To configure the size of each pool created, edit include/uevloop/config.h.

System pools usage

#include <uevloop/system/containers/system-pools.h>

// ...

uel_syspools_t pools;
uel_syspools_init(&pools);
// This allocates two pools:
//   1) pools.event_pool
//   2) pools.llist_node_pool

System queues

The sysqueues component contains the necessary queues for sharing data amongst the core components. It holds queues for events in differing statuses.

As is the case with system pools, the sysqueues component should not be directly operated by the programmer, except for declaration and initialisation.

Configure the size of each queue created in include/uevloop/config.h.

System queues usage

#include <uevloop/system/containers/system-queues.h>

// ...

uel_sysqueues_t queues;
uel_sysqueues_init(&queues);
// This allocates two queues:
//   1) queues.event_queue (events ready to be processed are put here)
//   2) queues.schedule_queue (events ready to be scheduled are put here)

Application

The application component is a convenient top-level container for all the internals of a µEvLoop'd app. It is not strictly necessary, but it takes care of much of the boilerplate in a typical application.

It also proxies functions to the event loop and scheduler components, serving as a single entry point for system operation.

The following code is a realistic minimal setup of the framework.

#include <uevloop/system/containers/application.h>
#include <stdint.h>

static volatile uint32_t counter = 0;
static uel_application_t my_app;

// 1 kHz timer ISR
void my_timer_isr(){
    my_timer_isr_flag = 0; // clear the hardware interrupt flag (platform-specific)
    uel_app_update_timer(&my_app, ++counter);
}

int main (int argc, char *argv[]){
    uel_app_init(&my_app);

    // From here, the programmer can:
    // - Schedule timers with `uel_app_run_later` or `uel_app_run_at_intervals`
    // - Enqueue closures with `uel_app_enqueue_closure`
    // - Set up observers with `uel_app_observe`
    // - Listen for signals set at other places

    while(1){  
        uel_app_tick(&my_app);
    }

    return 0;
}

Application registry

The application component can also keep a registry of modules to manage. See Appendix A: Modules for more information.

Core components

Scheduler

The scheduler is a component that keeps track of current execution time and closures to be run in the future. It provides similar functionality to the setTimeout and setInterval Javascript functions.

Two queues lead in and out of it: the inbound schedule_queue is externally fed events that should be scheduled and then accounted for; the outbound event_queue holds events that are due to be collected and processed.

This component needs access to the system's pools and queues.

Basic scheduler initialisation

#include <uevloop/system/containers/system-pools.h>
#include <uevloop/system/containers/system-queues.h>
#include <uevloop/system/scheduler.h>

// ...

// Create the system containers
uel_syspools_t pools;
uel_syspools_init(&pools);
uel_sysqueues_t queues;
uel_sysqueues_init(&queues);

// Create the scheduler
uel_scheduler_t scheduler;
uel_sch_init(&scheduler, &pools, &queues);

Scheduler operation

The scheduler component accepts closures and scheduling info and then turns them into a timer event. This timer is then inserted into a timer list, which is kept sorted by each timer's due time.

#include <stdio.h>
#include <stdint.h>
#include <uevloop/utils/closure.h>

static void *print_coords(void *context, void *params){
    uintptr_t x = (uintptr_t)context;
    uintptr_t y = (uintptr_t)params;
    printf("(x: %d, y: %d)\n", (int)x, (int)y);

    return NULL;
}

// ...

uel_closure_t  print_x_one = uel_closure_create(&print_coords, (void *)1);
uel_closure_t  print_x_two = uel_closure_create(&print_coords, (void *)2);
uel_closure_t  print_x_three = uel_closure_create(&print_coords, (void *)3);

// Schedules to run 1000ms in the future.
// Will print (x: 1, y: 4)
uel_sch_run_later(&scheduler, 1000, print_x_one, (void *)4);

// Schedules to run at intervals of 500ms, runs the first time after 500ms
// Will print (x: 2, y: 5)
uel_sch_run_at_intervals(&scheduler, 500, false, print_x_two, (void *)5);

// Schedules to run at intervals of 300ms, runs the first time the next runloop
// Will print (x: 3, y: 6)
uel_sch_run_at_intervals(&scheduler, 300, true, print_x_three, (void *)6);

The scheduler must be fed regularly to work. It needs both an update on the running time and a stimulus to process enqueued timers. Ideally, a hardware timer will consistently increment a counter and feed it in an ISR, while in the main loop the scheduler is instructed to process its queue.

// millisecond counter
volatile uint32_t counter = 0;

// 1kHz timer ISR
void my_timer_isr(){
    my_timer_isr_flag = 0; // clear the hardware interrupt flag (platform-specific)
    uel_sch_update_timer(&scheduler, ++counter);
}

// ...

// On the main loop
uel_sch_manage_timers(&scheduler);

When the function uel_sch_manage_timers is called, two things happen:

  1. The schedule_queue is flushed and every timer in it is scheduled accordingly;
  2. The scheduler iterates over the scheduled timer list from the beginning and splits it when it finds a timer scheduled further in the future. It then proceeds to move each timer from the extracted sublist to the event_queue, where they will later be collected and processed.

Timer events

Events are messages passed amongst the system internals that coordinate what tasks are to be run, when and in which order. Usually the programmer doesn't have to interact directly with events, timer events and observers being the only exceptions. The functions uel_sch_run_later and uel_sch_run_at_intervals return a uel_event_t *. With this handle, it is possible to pause, resume or even completely cancel a timer event.

#include <stddef.h>

uel_event_t *timer = uel_sch_run_at_intervals(&scheduler, 100, false, print_one, NULL);

// The event will be put on a hold queue in the scheduler
uel_event_timer_pause(timer);  

// The event will be rescheduled on the scheduler
uel_event_timer_resume(timer);

// The event will be ignored by the scheduler and destroyed at the `event loop`
uel_event_timer_cancel(timer);

When pausing and resuming timer events, be aware of the internal latencies: paused timers are only sent to the hold queue when their scheduled time is hit. Also, when resumed, they are rescheduled based solely on their period setting; the time that had elapsed before they were paused is ignored entirely. Should a timer that is both scheduled and paused be resumed before its due time is hit, it behaves as if it had never been paused. Cancelled timer events are equally subject to internal latency, as they are only destroyed when processed by the event loop. However, cancelled timers are not meant to be reused anyway. As a rule of thumb, never use a timer event after it has been cancelled.

Scheduler time resolution

There are two distinct factors that will determine the actual time resolution of the scheduler:

  1. the frequency of the feed in timer ISR
  2. the frequency the function uel_sch_manage_timers is called.

The basic resolution variable is the feed-in timer frequency. If it updates too sporadically, events scheduled for different moments become indistinguishable with respect to their schedule (e.g. most of the time, having the timer increment every 100ms will make any events scheduled less than 100ms apart run in the same runloop).

A good value for the timer ISR frequency is usually between 200 Hz and 1 kHz, but depending on project requirements and available resources it can be less. Down to around 10 Hz is still workable, but precision starts to deteriorate quickly below that.

There is little use having the feed-in timer ISR run at more than 1 kHz, as it is meant to measure milliseconds. Software timers are unlikely to be accurate enough for much greater frequencies anyway.

If the uel_sch_manage_timers function is not called frequently enough, events will pile up and won't be served in time. Just make sure it is called whenever the counter is updated or there are events on the schedule queue.

Event loop

The central piece of µEvLoop (even its name is a bloody reference to it) is the event loop, a queue of events to be processed sequentially. It is not aware of the execution time and simply processes all enqueued events when run. Most of the heavy work in the system happens here.

The event loop requires access to system's internal pools and queues.

Basic event loop initialisation

#include <uevloop/system/containers/system-pools.h>
#include <uevloop/system/containers/system-queues.h>
#include <uevloop/system/event-loop.h>

// Create system containers
uel_syspools_t pools;
uel_syspools_init(&pools);
uel_sysqueues_t queues;
uel_sysqueues_init(&queues);

// Create the event loop
uel_evloop_t loop;
uel_evloop_init(&loop, &pools, &queues);

Event loop usage

The event loop is meant to behave as a run-to-completion task scheduler. Its uel_evloop_run function should be called as often as possible so as to minimise execution latency. Each execution of uel_evloop_run is called a runloop.

The only way the programmer interacts with it directly, besides creation/initialisation, is by enqueuing hand-tailored closures, but other system components operate on the event loop behind the scenes.

Any closure can be enqueued multiple times.

#include <uevloop/utils/closure.h>

static void *add(void *context, void *params){
    uintptr_t *num = (uintptr_t *)context;
    uintptr_t other = (uintptr_t)params;
    *num += other;

    return NULL;
}

// ...

uintptr_t value = 0;
uel_closure_t closure = uel_closure_create(&add, (void *)&value);

uel_evloop_enqueue_closure(&loop, &closure, (void *)1);
// value is 0

uel_evloop_run(&loop);  
// value is 1

uel_evloop_enqueue_closure(&loop, &closure, (void *)2);
uel_evloop_enqueue_closure(&loop, &closure, (void *)3);
uel_evloop_run(&loop);
// value is 6

WARNING! uel_evloop_run is the single most important function within µEvLoop. Almost every other core component depends on the event loop and if this function is not called, the loop won't work at all. Don't ever let it starve.

Observers

The event loop can be instructed to observe some arbitrary volatile value and react to changes in it.

Because observers are completely passive, they are ideal for triggering side-effects from ISRs without adding any latency to the ISR itself. However, each observer set does incur extra latency during runloops, as the observed value must be continuously polled.

static volatile uintptr_t adc_reading = 0;

void my_adc_isr(){
    adc_reading  = my_adc_buffer;
    my_adc_isr_flag = 0;
}

static void *process_adc_reading(void *context, void *params){
    uintptr_t value = (uintptr_t)params;
    // Do something with `value`

    return NULL;
}
uel_closure_t processor =
    uel_closure_create(process_adc_reading, NULL);

// This ensures each runloop the `adc_reading` variable is polled and, in case
// of changes to it, the `processor` closure is called with its new value as
// parameter.
uel_event_t *observer = uel_evloop_observe(&loop, &adc_reading, &processor);

// When an observer isn't needed anymore, it can be disposed of to release any
// used system resources.
// **DON'T** use an observer after it has been cancelled.
uel_event_observer_cancel(observer);

Signal

Signals are similar to events in Javascript. They allow distant parts of the system to communicate with each other in a pub/sub fashion.

At the centre of the signal system is the Signal Relay, a structure that binds specific signals to their listeners. When a signal is emitted, the relay will asynchronously run each listener registered for that signal. If the listener is not recurring, it will be destroyed upon execution by the event loop.

Signals and relay initialisation

To use signals, the programmer must first define which signals will be available in a particular relay, then create the relay bound to these signals.

To be initialised, the relay must have access to the system's internal pools and queues. The programmer must also supply it with a buffer of linked lists, where listeners will be stored.

#include <uevloop/system/containers/system-pools.h>
#include <uevloop/system/containers/system-queues.h>
#include <uevloop/system/signal.h>
#include <uevloop/utils/linked-list.h>

// Create the system containers
uel_syspools_t pools;
uel_syspools_init(&pools);
uel_sysqueues_t queues;
uel_sysqueues_init(&queues);

// Define what signals will be available to this relay.
// Doing so in an enum makes it easy to add new signals in the future.
enum my_component_signals {
   SIGNAL_1 = 0,
   SIGNAL_2,
   // New signals would go here
   SIGNAL_COUNT
};

// Declare the relay buffer. Note this array must be as large as the number of signals.
uel_llist_t buffer[SIGNAL_COUNT];

// Create the relay
uel_signal_relay_t relay;
uel_signal_relay_init(&relay, &pools, &queues, buffer, SIGNAL_COUNT);

Signal operation

// This is the listener function.
static void *respond_to_signal(void *context, void *params){
  uintptr_t num = (uintptr_t)context;
  // Signals can be emitted with parameters, just like events in JS
  char c = (char)(uintptr_t)params;
  printf("%d%c\n", (int)num, c);

  return NULL;
}

// Listeners can be persistent. They will fire once each time the signal is emitted
uel_closure_t respond_to_signal_1 = uel_closure_create(&respond_to_signal, (void *)1);
uel_signal_listener_t listener_1 = uel_signal_listen(SIGNAL_1, &relay, &respond_to_signal_1);

// Listeners can also be transient, so they fire just on the first emission
uel_closure_t respond_to_signal_2 = uel_closure_create(&respond_to_signal, (void *)2);
uel_signal_listener_t listener_2 = uel_signal_listen_once(SIGNAL_2, &relay, &respond_to_signal_2);

// ...
uel_signal_emit(SIGNAL_1, &relay, (void *)('a')); // prints 1a
uel_signal_emit(SIGNAL_2, &relay, (void *)('b')); // prints 2b
uel_signal_emit(SIGNAL_1, &relay, (void *)('c')); // prints 1c
uel_signal_emit(SIGNAL_2, &relay, (void *)('d')); // doesn't print anything

Please note the listener function will not be executed immediately, despite what this last snippet may lead one to believe. Internally, each closure is sent to the event loop, and only when it runs will the closures be invoked.

You can also unlisten for signals. This prevents the listener returned by a uel_signal_listen() or uel_signal_listen_once() operation from having its closure invoked when the event loop performs the next runloop. Additionally, said listener will be removed from the signal vector at that opportunity.

uel_signal_unlisten(listener_1, &relay);
uel_signal_unlisten(listener_2, &relay);  // This has no effect because the listener
                                          // for SIGNAL_2 has already been marked as unlistened

Appendix B: Promises

Promises are data structures that bind an asynchronous operation to the possible execution paths that derive from its result. They are heavily inspired by Javascript promises.

Promises allow for very clean asynchronous code and exceptional error handling.

Promise stores

All promises must be created from a store, to which they return once destroyed. A promise store encapsulates object pools for promises and segments, the two composing pieces of promise operation.

Promise store creation

Promise stores need access to two object pools, one for promises and one for segments.

#include <uevloop/utils/object-pool.h>
#include <uevloop/system/promise.h>

#define PROMISE_STORE_SIZE_LOG2N    4   // 16 promises
#define SEGMENT_STORE_SIZE_LOG2N    6   // 64 segments

uel_objpool_t promise_pool;
uel_objpool_t segment_pool;
UEL_DECLARE_OBJPOOL_BUFFERS(uel_promise_t, PROMISE_STORE_SIZE_LOG2N, promise);
UEL_DECLARE_OBJPOOL_BUFFERS(uel_promise_segment_t, SEGMENT_STORE_SIZE_LOG2N, segment);
uel_objpool_init(
    &promise_pool,
    PROMISE_STORE_SIZE_LOG2N,
    sizeof(uel_promise_t),
    UEL_OBJPOOL_BUFFERS(promise)
);
uel_objpool_init(
    &segment_pool,
    SEGMENT_STORE_SIZE_LOG2N,
    sizeof(uel_promise_segment_t),
    UEL_OBJPOOL_BUFFERS(segment)
);

uel_promise_store_t store = uel_promise_store_create(&promise_pool, &segment_pool);

Promises and segments

As mentioned above, promises and segments are the two building blocks for composing asynchronous chains of events. Promises represent the asynchronous operation per se, while segments are the synchronous processing that occurs when a promise settles.

Settling a promise means transitioning it into either the resolved or rejected state, which respectively indicate success or failure of the asynchronous operation, optionally assigning a meaningful value to the promise.

Promise creation

There are two necessary pieces for creating a promise: a store and a constructor closure that starts the asynchronous operation.

void *start_some_async(void *context, void *params){
    uel_promise_t *promise = (uel_promise_t *)params;
    // ...
    return NULL; // return value is ignored
}

// ...

uel_promise_t *promise =
    uel_promise_create(&store, uel_closure_create(start_some_async, NULL));

When the promise is created, start_some_async is invoked immediately, taking the promise pointer as parameter.

On creation, promises are in the pending state. This means the asynchronous operation has not completed yet and the promise does not hold any meaningful value.

Promise settling

Once the operation is completed (which can also be done synchronously, from inside the constructor closure), there are two functions for signalling either success or failure of the asynchronous operation:

uel_promise_resolve(promise1, (void *)SOME_VALUE); // operation succeeded
uel_promise_reject(promise2, (void *)SOME_ERROR);  // operation failed

Once a promise is settled, it holds a value that can be accessed via promise->value.

Segments

Segments represent the synchronous phase that follows a promise settling. They contain two closures, one for handling resolved promises and one for handling rejected promises. Either one is chosen at runtime, depending on the settled state of the promise, and is invoked with the promise as parameter.

Depending on the promise state, attaching segments has different effects. While a promise is pending, attached segments are enqueued for execution once the promise settles. Should the promise already be settled, attached segments are processed immediately instead.

#include <stdio.h>
#include <stddef.h>

void *report_success(void *context, void *params) {
    char *tag = (char *)context;
    uel_promise_t *promise = (uel_promise_t *)params;
    printf("promise %s resolved with %d\n", tag, (int)(uintptr_t)promise->value);
    return NULL;
}

void *report_error(void *context, void *params) {
    char *tag = (char *)context;
    uel_promise_t *promise = (uel_promise_t *)params;
    printf("promise %s rejected with %d\n", tag, (int)(uintptr_t)promise->value);
    return NULL;
}

// ...

// Assume p1, p2 and p3 as pending, resolved and rejected promises, respectively
// p2 is resolved with (uintptr_t)10 and p3 is rejected with (uintptr_t)100

uel_promise_after(
    p1,
    uel_closure_create(report_success, (void *)"p1"),
    uel_closure_create(report_error, (void *)"p1")
);
// Neither closure gets invoked. Instead, a new segment containing them is enqueued.

uel_promise_after(
    p2,
    uel_closure_create(report_success, (void *)"p2"),  
    uel_closure_create(report_error, (void *)"p2")
);
// As the promise is already resolved, instead of enqueueing a segment, the first
// closure is invoked.
// "p2 resolved with 10" is printed

uel_promise_after(
    p3,
    uel_closure_create(report_success, (void *)"p3"),  
    uel_closure_create(report_error, (void *)"p3")
);
// Similarly, as the promise is already rejected, the second closure gets invoked
// immediately.
// "p3 rejected with 100" is printed

uel_promise_resolve(p1, (void *)1);
// Upon settling, segments are removed from the queue and processed.
// "p1 resolved with 1" is printed

The uel_promise_after() function takes two closures as parameters. This is useful for splitting the execution path in two mutually exclusive routes depending on the settled state of the promise.

There are three other functions for enqueuing segments. They can be used for attaching segments that only produce effects on specific settled states or attaching the same closure to both states:

uel_promise_then(my_promise, my_closure);
// `my_closure` is invoked only when `my_promise` is resolved.
// The added segment is ignored if the promise is rejected.

uel_promise_catch(my_promise, my_closure);
// `my_closure` is invoked only when `my_promise` is rejected.
// The added segment is ignored if the promise is resolved.

uel_promise_always(my_promise, my_closure);
// `my_closure` is invoked always upon settling, regardless of the settled state

Segment chains and promise resettling

Any number of segments can be attached to some promise. They will be either processed immediately, in case the promise is already settled, or enqueued for processing upon settling in the future. Regardless, attached segments are always processed in registration order.

Chaining segments is useful because segments have the ability to commute between execution paths through promise resettling. To resettle a promise means changing its state and value.

void *store_char(void *context, void *params) {
    unsigned char *var = (unsigned char *)context;
    uel_promise_t *promise = (uel_promise_t *)params;
    if((uintptr_t)promise->value <= 255) {
        *var = (unsigned char)(uintptr_t)promise->value;
    } else {
        uel_promise_resettle(promise, UEL_PROMISE_REJECTED, (void *)"Value too big");
    }
    return NULL;
}

void *report_error(void *context, void *params) {
    uel_promise_t *promise = (uel_promise_t *)params;
    printf("promise was rejected with error '%s'\n", (char *)promise->value);
    return NULL;
}

void *done(void *context, void *params) {
    static char *states[3] = { "pending", "resolved", "rejected" };
    uel_promise_t *promise = (uel_promise_t *)params;
    printf("operation done with state '%s'\n", states[promise->state]);
    return NULL;
}

unsigned char c1 = 0;
uel_promise_t *p1 = uel_promise_create(&store, uel_nop()); // Creates a pending promise
uel_promise_then(p1, uel_closure_create(store_char, (void *)&c1));
uel_promise_catch(p1, uel_closure_create(report_error, NULL));
uel_promise_always(p1, uel_closure_create(done, NULL));

This builds a segment chain attached to promise p1. Each segment added describes one synchronous step to be executed for each of the two settled states.

Segments are processed sequentially, from first to last, starting with the closure relative to the state the promise was settled as. The following table illustrates this chain:

State      Segment 1     Segment 2      Segment 3
resolved   store_char    nop            done
rejected   nop           report_error   done

The outcome of this chain is determined upon settling. For example, given the following resolution:

uel_promise_resolve(p1, (void *)10);

The first closure invoked is store_char, in segment 1. Inside the closure function, the test condition promise->value <= 255 is true, so the closure stores the promise's value in the c1 variable.

As the promise remains resolved, it advances to segment 2, where it finds a nop (no-operation). This is due to the segment being attached via uel_promise_catch.

The promise then advances to segment 3, where it finds the done closure. The process then ends and the promise retains its state and value (UEL_PROMISE_RESOLVED and (void *)10 in this case). By then, c1 holds (char)10 and operation done with state 'resolved' is printed.

If instead the promise was resolved as:

uel_promise_resolve(p1, (void *)1000);

Then the test condition promise->value <= 255 would have failed. The store_char closure would then skip storing the value and instead resettle the promise as rejected, with an error message as its value. This effectively commutes the execution path to the rejected branch.

Once store_char returns, as the promise is now rejected, the report_error closure is invoked and promise was rejected with error 'Value too big' is printed. The done closure is then invoked and prints operation done with state 'rejected'.

Similarly, if instead the promise had been rejected as:

uel_promise_reject(p1, (void *)"Asynchronous operation failed");

Segment 1 would be ignored, report_error would be invoked and print promise was rejected with error 'Asynchronous operation failed' and, at segment 3, done would be invoked and print operation done with state 'rejected'.

Resettling can also be used for recovering from errors if it commutes back to resolved state. This constitutes an excellent exception system that allows errors raised in loose asynchronous operations to be rescued consistently. Even exclusively synchronous processes can benefit from this error rescuing system.
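As an illustrative sketch of such recovery, using only the functions shown above (the closure name and fallback value below are hypothetical):

```c
// Hypothetical recovery closure: if the promise has been rejected, log the
// error, substitute a fallback value and commute back to the resolved branch.
void *rescue_with_fallback(void *context, void *params) {
    uel_promise_t *promise = (uel_promise_t *)params;
    printf("recovering from '%s'\n", (char *)promise->value);
    uel_promise_resettle(promise, UEL_PROMISE_RESOLVED, context);
    return NULL;
}

// ...

uel_promise_catch(my_promise, uel_closure_create(rescue_with_fallback, (void *)0));
// Segments attached after this point run on the resolved branch, even if the
// original operation was rejected.
```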

As a last note, segments can also resettle a promise as UEL_PROMISE_PENDING. This halts the synchronous stage immediately, leaving any unprocessed segments in place. This phase can be restarted by either resolving or rejecting the promise again.

Nested promises

Promises can be nested into each other, allowing for complex asynchronous operations to compose a single segment chain. This provides superb flow control for related asynchronous operations that would otherwise produce a lot of spaghetti code.

Whenever a promise segment returns a non-null value, that value is cast to a promise pointer. The original promise then transitions back to the pending state and waits for the returned promise to settle. Once this new promise settles, the original promise is resumed with whatever state and value the new promise settled with.

For instance, suppose the programmer is working with an MCU that possesses an ADC with an arbitrarily long buffer and a DMA channel. The program must start the ADC, which will sample N times and store the samples in its buffer. After N samples have been taken, the DMA channel must be instructed to move them out of the ADC buffer into some memory-mapped buffer, where they will be processed.

This could be easily accomplished with signals or callbacks, but would eventually lead to confusing and discontinuous code. With nested promises, however, it is easy to describe the whole process into one single chain.

Suppose this is the implementation for the DMA and ADC modules:

// ADC implementation
static uel_promise_t *adc_promise;

// This ISR is called when the ADC finishes sampling
// the instructed number of samples
static void adc_isr() {
    uel_promise_resolve(adc_promise, (void *)ADC_BUFFER);
    adc_promise = NULL;
    adc_isr_flag = 0;
}

// Launches the ADC and returns. This effectively starts an asynchronous operation.
static void *start_adc(void *context, void *params) {
    uintptr_t count = (uintptr_t)context;
    uel_promise_t *promise = (uel_promise_t *)params;
    ADC_COUNT = count;
    ADC_START = 1;
    adc_promise = promise;
    return NULL;
}

// Public interface function. Returns a promise that,
// when resolved, will contain the address of the buffer where
// data was sampled
uel_promise_t *adc_read(uintptr_t count) {
    return uel_promise_create(&store, uel_closure_create(start_adc, (void *)count));
}
// DMA implementation
static uel_promise_t *dma_promise;

// This ISR is called when the DMA channel has finished
// moving data
static void dma_isr() {
    uel_promise_resolve(dma_promise, (void *)DMA_DESTINATION);
    dma_promise = NULL;
    dma_isr_flag = 0;
}

// Auxiliary structure to hold DMA mapping information
struct dma_mapping {
    void *source;
    void *destination;
    uintptr_t count;
};

// Launches the DMA channel, starting an asynchronous operation
static void *start_dma(void *context, void *params) {
    struct dma_mapping *mapping = (struct dma_mapping *)context;
    uel_promise_t *promise = (uel_promise_t *)params;
    DMA_SOURCE = mapping->source;
    DMA_DESTINATION = mapping->destination;
    DMA_COUNT = mapping->count;
    DMA_START = 1;
    dma_promise = promise;
    return NULL;
}

// Public interface function.
// Returns a promise that, when resolved, will hold the address
// of the buffer to where data was moved
uel_promise_t *dma_move(void *destination, void *source, uintptr_t count) {
    struct dma_mapping mapping = { source, destination, count };
    return uel_promise_create(&store, uel_closure_create(start_dma, (void *)&mapping));
}

Implementing the project requirements is this simple:

#define N  10 // Will take 10 samples

static void *move_from_buffer(void *context, void *params) {
    uel_promise_t *promise = (uel_promise_t *)params;
    void *source = promise->value;
    void *destination = context;
    return (void *)dma_move(destination, source, N);
}

static void *data_moved(void *context, void *params) {
    uel_promise_t *promise = (uel_promise_t *)params;
    printf("Data moved to %p\n", promise->value);
    return NULL;
}

// ...

uel_promise_t *promise = adc_read(N);
unsigned char destination[N];
uel_promise_then(promise, uel_closure_create(move_from_buffer, (void *)destination));
uel_promise_then(promise, uel_closure_create(data_moved, NULL));

Note that, in the above example, promises are resolved synchronously inside the ISRs. This may not be desirable for performance reasons, but can easily be improved by enqueueing into the event loop a closure that resolves nested promises.

Promise destruction and promise helpers

To destroy a promise, call uel_promise_destroy(). This will release all segments and then the promise itself. Settling a promise after it has been destroyed is undefined behaviour.

Because settling and destroying promises are so frequent, there are helper functions that emit closures that automate this work:

// When invoked, destroys the promise
uel_closure_t destroyer = uel_promise_destroyer(promise);

// When invoked with some value, resolves the promise with that value
uel_closure_t resolver = uel_promise_resolver(promise);

// When invoked with some value, rejects the promise with that value
uel_closure_t rejecter = uel_promise_rejecter(promise);

Appendix B: Modules

Modules are independent units of behaviour, self-contained and self-allocated, with clear lifecycle hooks, interface and dependencies. They enforce separation of concerns and isolation by making clear how your code interacts with the rest of the application.

Modules are absolutely optional and very thin on the library side. They are basically a convention of how to write code in a fashion that works well with µEvLoop.

Modules can serve a variety of purposes:

  • They can act as bridges to static data, such as SFRs;
  • They can be object factories, meant to distribute and recycle objects to other modules;
  • They can act as services, background processes that interact with other parts of the application in a satellite fashion.

Module creation

// File: my_module.h

#include <uevloop/utils/module.h>
#include <uevloop/system/containers/application.h>

typedef struct my_module my_module_t; // Private implementation

uel_module_t *my_module(uel_application_t *my_app);

// Signatures of any functions that operate the module go here

// File: my_module.c

#include "my_module.h"

struct my_module {
    uel_module_t base; // Inherits base interface

    // Other properties
};

// The config hook is used to prepare the module.
// When it is fired by the `application`, all modules are guaranteed to be
// registered, but may still be in an inconsistent state.
// Modules are loaded in the order they are supplied, so previous modules will
// already be configured.
static void config(uel_module_t *mod){
    // ...
}

// The launch hook is where the module should effectively start its functions.
// All modules here are guaranteed to be registered and properly configured.
static void launch(uel_module_t *mod){
    // ...
}

// The constructor function is the only mandatory symbol to be exported.
// This function should initialise all independent or readily available data.
// It must not be called more than once per module.
uel_module_t *my_module(uel_application_t *my_app){
    static my_module_t module;
    uel_module_init(&module.base, config, launch, my_app);

    // Initialise other module's properties

    return &module.base;
}

Module registration

Modules are operated by the application component. It is responsible for loading, configuring and launching each module.

// File: my_app_modules.h

#include "my_module.h"

// Each module must be identified by a unique integer ID, zero-indexed.
// Storing module IDs in an enum makes it easy to add new modules in the future.
enum MY_APP_MODULES {
    MY_MODULE,
    // other modules...,
    MY_APP_MODULE_COUNT
};
// File: main.c

#include <uevloop/system/containers/application.h>
#include "my_app_modules.h"

// Creates the controlling application
uel_application_t my_app;
uel_app_init(&my_app);

// Declares a list of module pointers to be supplied to the application.
uel_module_t *modules[MY_APP_MODULE_COUNT];

// Individually initialising pointers in the list ensures IDs always
// match their corresponding module, even if they change during development.
modules[MY_MODULE] = my_module(&my_app);

// This loads the modules into the application:
// - The module list is stored as the application registry;
// - Each module's configuration hook is sequentially invoked, according to
//   their position in the registry;
// - Each module's launch hook is sequentially invoked, equally ordered by
//   the registry.
uel_app_load(&my_app, modules, MY_APP_MODULE_COUNT);

Dependency Injection

There are two methods for injecting a registered module: parametrised injection and ad-hoc injection. Each is adequate for a different situation:

Parametrised injection

Parametrised dependencies are dependencies that are supplied during module construction.

Given the following module header:

// File: my_greeter.h

#include "my_module.h"
#include <uevloop/utils/module.h>
#include <uevloop/system/containers/application.h>

typedef struct my_greeter my_greeter_t;
uel_module_t *my_greeter(
    uel_application_t *my_app,
    const char *name,         // <-- Parametrised dependency
    my_module_t *other_module // <-- Dependencies can be other modules
);

The application loading procedure would be:

// File: my_app_modules.h

enum MY_APP_MODULES {
    MY_MODULE,
    MY_GREETER,
    MY_APP_MODULE_COUNT
};
// File: main.c

uel_module_t *modules[MY_APP_MODULE_COUNT];

modules[MY_MODULE] = my_module(&my_app);

// Injects parametrised dependencies `name` and `other_module`
modules[MY_GREETER] =
    my_greeter(&my_app, "King Kong", (my_module_t *)modules[MY_MODULE]);

uel_app_load(&my_app, modules, MY_APP_MODULE_COUNT);

Parametrised injection facilitates reasoning about what each module depends on and in which order they are loaded. This is the preferable way to inject dependencies into modules.

Ad-hoc injection

Ad-hoc injections can occur anywhere, including places outside the scope of the application's managed modules.

// ANYWHERE with access to `my_app`:

#include <uevloop/system/containers/application.h>
#include "my_greeter.h"
#include "my_app_modules.h"

my_greeter_t *greeter = (my_greeter_t *)uel_app_require(&my_app, MY_GREETER);

While ad-hoc injections seem easier, they make it harder to know which modules a particular piece of code depends on. Also, because they require the modules to already be loaded into the registry, they cannot be used during the configuration phase.

Appendix C: Useful goodies

Iterators

Iterators are abstractions over arbitrary collections of items. They provide a uniform interface for yielding each element in the iterated collection, regardless of how the collection is implemented.

There are two iterator specialisations shipped with µEvLoop:

Array iterators

#include <uevloop/utils/iterator.h>
#define NUMS_LENGTH	5

uintptr_t nums[NUMS_LENGTH] = { 1, 2, 3, 4, 5 };
uel_iterator_array_t array_iterator =
        uel_iterator_array_create(nums, NUMS_LENGTH, sizeof(uintptr_t));

Linked list iterators

#include <uevloop/utils/iterator.h>
#include <uevloop/utils/linked-list.h>

uel_llist_node_t nodes[] = {
    { (void *)1, NULL },
    { (void *)2, NULL },
    { (void *)3, NULL }
};
uel_llist_t list;
uel_llist_init(&list);
uel_llist_push_head(&list, &nodes[0]);
uel_llist_push_head(&list, &nodes[1]);
uel_llist_push_head(&list, &nodes[2]);

uel_iterator_llist_t llist_iterator = uel_iterator_llist_create(&list);

Iterator operation

Iterators revolve around an embedded function pointer, next, which is responsible for yielding a pointer to each element in the collection.

// cast to generic iterator
uel_iterator_t *iterator = (uel_iterator_t *)&array_iterator;

// when supplied with `NULL` as parameter to `next`, yields
// the first element in the collection
uintptr_t *current = NULL;

while(true){
    current = (uintptr_t *)iterator->next(iterator, (void *)current);
    if(current != NULL){
        // do something
    }else break; // when there are no more elements, `next` yields `NULL`
}

Iteration helpers

Besides manually operating an iterator, there are several iteration helpers that automate this work.

#include <uevloop/utils/closure.h>

void *accumulate(void *context, void *params){
    uintptr_t *acc = (uintptr_t *)context;
    uintptr_t num = *(uintptr_t *)params;
    *acc += num;
    return (void *)true; // required; returning false is equivalent to a `break`
}

uintptr_t acc = 0;
uel_closure_t accumulate_into_acc = uel_closure_create(accumulate, (void *)&acc);

uel_iterator_foreach(iterator, &accumulate_into_acc);
// if `iterator` is the same array iterator defined previously,
// acc == 15

There are many more iteration helpers; check the docs for more details.

Custom iterators

Iterators are meant to be extensible. If you need to enumerate your own type, write an iterator specialisation:

struct my_collection_type{
    // whatever, could be a binary tree, a hash table etc
};
struct my_collection_iterator{
    uel_iterator_t base; // inherits the base interface
    // any other state necessary for iteration
};
void *my_collection_next(struct my_collection_iterator *iterator, void *last){
    // implement your iteration logic here.
    // remember:
    //   if last == NULL, yield the first element.
    //   if last == last element in collection, yield NULL.
}

struct my_collection_type collection; // initialised as needed
struct my_collection_iterator iterator = {
    { (uel_iterator_next_t)&my_collection_next, (void *)&collection },
    // initialise any state as necessary
};

Note that base is required to be the first member of your custom iterator structure. That way, a pointer to your iterator can be safely cast to uel_iterator_t * and back.
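To make this layout rule concrete, here is a self-contained sketch of the same pattern built on a hypothetical minimal protocol that mirrors the shape of uel_iterator_t (the names below are illustrative, not the library's):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical minimal protocol mirroring the shape of uel_iterator_t
 * (these are NOT the library's actual definitions) */
typedef struct base_iterator base_iterator_t;
typedef void *(*next_fn_t)(base_iterator_t *, void *);
struct base_iterator {
    next_fn_t next;
    void *collection;
};

/* Specialisation over a NULL-terminated array of strings */
struct strv_iterator {
    base_iterator_t base; /* first member: casting to base_iterator_t * is safe */
    /* any extra iteration state would go here */
};

static void *strv_next(base_iterator_t *it, void *last) {
    char **strings = (char **)it->collection;
    /* NULL asks for the first element; otherwise advance past `last` */
    char **cursor = (last == NULL) ? strings : (char **)last + 1;
    return (*cursor == NULL) ? NULL : (void *)cursor;
}

/* Drives the specialised iterator exclusively through the generic interface */
size_t count_strings(char **strings) {
    struct strv_iterator iterator = { { strv_next, (void *)strings } };
    base_iterator_t *it = (base_iterator_t *)&iterator;
    size_t count = 0;
    for (void *cur = it->next(it, NULL); cur != NULL; cur = it->next(it, cur)) {
        count++;
    }
    return count;
}
```

Because base is the first member, the cast in count_strings is well-defined: a pointer to the specialised struct points at its base member.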

Conditionals

Conditionals are functional switches in the form of a tuple of closures <test, if_true, if_false>.

When applied to some input, this input is submitted to the test closure. If it returns true, if_true closure is invoked, otherwise, if_false is invoked. All closures take the same input as arguments.

#include <uevloop/utils/conditional.h>
#include <stdio.h>

void *is_divisible(void *context, void *params){
    uintptr_t divisor = (uintptr_t)context;
    uintptr_t dividend = (uintptr_t)params;
    return (void *)(uintptr_t)(dividend % divisor == 0);
}

void *qualify_number(void *context, void *params){
    char *parity = (char *)context;
    uintptr_t num = (uintptr_t)params;
    printf("%lu is %s\n", (unsigned long)num, parity);
    return NULL;
}

uel_closure_t is_even =
        uel_closure_create(is_divisible, (void *)2);
uel_closure_t print_even =
        uel_closure_create(qualify_number, (void *)"even");
uel_closure_t print_odd =
        uel_closure_create(qualify_number, (void *)"odd");

uel_conditional_t number_parity;
uel_conditional_init(&number_parity, is_even, print_even, print_odd);

uel_conditional_apply(&number_parity, (void *)1);
// prints "1 is odd"

uel_conditional_apply(&number_parity, (void *)2);
// prints "2 is even"

Pipelines

Pipelines are sequences of closures whose outputs are connected to the next closure's input.

When applied to some input, this input is passed along each closure, being transformed along the way. Applying a pipeline returns the value returned by the last closure in it.

#include <uevloop/utils/pipeline.h>
#include <math.h>

void *exponentiate(void *context, void *params){
    uintptr_t power = (uintptr_t)context;
    uintptr_t base = (uintptr_t)params;
    return (void *)(uintptr_t)pow(base, power);
}
void *add(void *context, void *params){
    uintptr_t term1 = (uintptr_t)context;
    uintptr_t term2 = (uintptr_t)params;
    return (void *)(uintptr_t)(term1 + term2);
}

uel_closure_t square = uel_closure_create(exponentiate, (void *)2);
uel_closure_t increment = uel_closure_create(add, (void *)1);

// creates the math_pipeline, equivalent to the function f(x) = x^2 + 1
UEL_PIPELINE_DECLARE(math, square, increment);

uintptr_t res = (uintptr_t)uel_pipeline_apply(&math_pipeline, (void *)5);
// res == f(5) == (5^2 + 1) == 26

Functional helpers

Iterators, conditionals and pipelines are objects associated with synchronous operations.

To make them more suitable for asynchronous contexts, there are numerous helpers that abstract some of their operational details and export them as portable closures.

Please read the docs to find out more about them.

Automatic pools and automatic pointers

Automatic pools are wrapper objects that enhance the abilities of object pools. They allow constructors and destructors to be attached and, instead of yielding bare pointers, yield uel_autoptr_t automatic pointers.

Automatic pointers are objects that associate an object with the pool it came from, making it trivial to destroy the object regardless of access to its source pool. An automatic pointer issued for an object of type T can be safely cast to T**. Casting to any other pointer type is undefined behaviour.
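The reason the T** cast works can be illustrated with a hypothetical layout in which the managed object pointer is the struct's first member (this mirrors the idea only; it is not necessarily the library's actual uel_autoptr_t definition):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical automatic pointer layout: the object pointer comes first,
 * so a pointer to the whole struct doubles as a T** view of the object. */
struct fake_autoptr {
    void *object;      /* first member: &autoptr reads as a T** */
    void *source_pool; /* bookkeeping to return the object to its pool */
};

int read_through_autoptr(void) {
    int value = 42;
    struct fake_autoptr autoptr = { &value, NULL };
    /* The T** view: dereferencing once yields the object pointer,
     * dereferencing twice yields the object itself */
    int **as_int_pp = (int **)&autoptr;
    return **as_int_pp;
}
```

Casting to anything other than T** would misinterpret the second member, which is why only the T** cast is defined.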

Automatic pool creation

Automatic pools are created and initialised in very similar ways to object pools:

#include <uevloop/utils/automatic-pool.h>

#define TUPLE_POOL_SIZE_LOG2N    5    // 32 tuples
struct tuple {
    int a;
    int b;
};
UEL_DECLARE_AUTOPOOL_BUFFERS(struct tuple, TUPLE_POOL_SIZE_LOG2N, tuple);
uel_autopool_t tuple_pool;
uel_autopool_init(
    &tuple_pool,
    TUPLE_POOL_SIZE_LOG2N,
    sizeof(struct tuple),
    UEL_AUTOPOOL_BUFFERS(tuple)
);

Automatic pool operation and automatic pointer destruction

After an automatic pool has been created, it can allocate and deallocate objects, just like object pools.

struct tuple **t = (struct tuple **)uel_autopool_alloc(&tuple_pool);
if(t) {
    (**t).a = 1;
    (**t).b = 10;
    uel_autoptr_dealloc((uel_autoptr_t)t);
}

Automatic pool constructors and destructors

It is possible to attach a constructor and a destructor to an automatic pool. These are closures that will be invoked upon object allocation and deallocation, taking a bare pointer to the object being operated on.

static void *default_tuple(void *context, void *params) {
    struct tuple *value = (struct tuple *)context;
    struct tuple *t = (struct tuple *)params;
    *t = *value;
    return NULL;
}

static void *zero_tuple(void *context, void *params) {
    struct tuple *t = (struct tuple *)params;
    t->a = 0;
    t->b = 0;
    return NULL;
}

// ...

static struct tuple default_value = { 10, 100 };
uel_autopool_set_constructor(
    &tuple_pool,
    uel_closure_create(default_tuple, (void *)&default_value)
);
uel_autopool_set_destructor(
    &tuple_pool,
    uel_closure_create(zero_tuple, NULL)
);

struct tuple **t1 = (struct tuple **)uel_autopool_alloc(&tuple_pool);
// (*t1)->a == 10, (*t1)->b == 100

uel_autoptr_dealloc((uel_autoptr_t)t1);
// just before t1 is deallocated, (*t1)->a == 0 and (*t1)->b == 0

Concurrency model

µEvLoop is meant to run baremetal, primarily in simple single-core MCUs. That said, nothing stops it from being employed as a side library in RTOSes or in full-fledged x86_64 multi-threaded desktop applications.

Communication between asynchronous contexts, such as ISRs and side threads, is done through shared data structures defined inside the library's core components. As with any non-atomic shared memory, accesses to these structures must be synchronised to avoid memory corruption.

µEvLoop does not try to implement a universal locking scheme fit for any device. Instead, some generic critical section definition is provided.

Critical sections

By default, critical sections in µEvLoop are a no-op. They are provided as a set of macros that can be overridden by the programmer to implement platform-specific behaviour.

For instance, while running bare metal it may be enough to disable interrupts to keep accesses synchronised. In a multi-threaded RTOS environment, on the other hand, it may be necessary to use a mutex.

There are three macros that define critical section implementation:

  1. UEL_CRITICAL_SECTION_OBJ_TYPE

If needed, a global critical section object can be declared. If this macro is defined, this object will be available to any critical section under the symbol uel_critical_section.

The UEL_CRITICAL_SECTION_OBJ_TYPE macro defines the type of the object. It is the programmer's responsibility to declare, globally allocate and initialise the object.

  2. UEL_CRITICAL_ENTER

Enters a new critical section. From this point until the critical section exits, no other thread or ISR may attempt to access the system's shared memory.

  3. UEL_CRITICAL_EXIT

Exits the current critical section. After this is called, any shared memory is allowed to be claimed by some party.
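As an illustrative sketch, a hosted multi-threaded build could back these macros with a pthread mutex, while a bare-metal Cortex-M build might simply mask interrupts. Where exactly the overrides are supplied depends on how the build is configured; everything below beyond the three macro names is an assumption:

```c
// Hosted, multi-threaded sketch: back critical sections with a pthread mutex
#define UEL_CRITICAL_SECTION_OBJ_TYPE pthread_mutex_t
#define UEL_CRITICAL_ENTER pthread_mutex_lock(&uel_critical_section);
#define UEL_CRITICAL_EXIT pthread_mutex_unlock(&uel_critical_section);

// Somewhere global, the programmer declares and initialises the object:
// pthread_mutex_t uel_critical_section = PTHREAD_MUTEX_INITIALIZER;

// Bare-metal Cortex-M sketch: masking interrupts is often enough
// #define UEL_CRITICAL_ENTER __disable_irq();
// #define UEL_CRITICAL_EXIT __enable_irq();
```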

Motivation

I often work with small MCUs (8- and 16-bit) that simply don't have the power to run an RTOS or any fancy scheduling solution. Right now I am working on a new commercial project and felt the need to build something of my own. µEvLoop is my bet on how a modern, interrupt-driven and predictable embedded application should be built. I am also looking for a new job and needed to improve my portfolio.

Roadmap

  • Better error handling
  • Logging helpers
  • Macro shortcuts for frequently used functions
  • More lifecycle hooks for modules
  • Application teardown/exit
  • Implement application signals