This project was originally for the 2024 International Conference on Embedded Wireless Systems and Networks (EWSN) Call For Competitors. The goal of this competition was to implement and evaluate different approaches to designing efficient intermittently-powered sensing systems. Ideally, this would advance the field of battery-free computing and energy harvesting research.
The winning team will be the one maximizing the number of challenges solved over a predefined time window, during which the competition organizers will emulate varying degrees of available energy. To solve a challenge, contestants must use hashcash, a SHA-1-based proof-of-work algorithm for which a reference implementation is provided by the organizers.
Example: given the challenge word 'EWSN2024' and a difficulty of 16 bits, we first create a starting string in a well-known format, e.g., 1:16:240403:EWSN2024::WXhnFeDleN:1. The colon-separated fields are: version (always 1), bits (16, per the difficulty), date (6 digits, fixed for the competition), resource (the given challenge word), extension (empty), random (a user-chosen random string), and a counter. We then check whether this string's SHA-1 hash has 16 leading zeros (most significant bits), as the chosen difficulty demands. Our string (with counter 1) hashes to 20db26cb6b1e17a2079fc3daf05fd01a7ed08cb5, which has only two leading zero bits. So we increment the counter to 2, hash the updated string, and keep incrementing until we find a string whose hash has the required leading zeros. In this example, after 96620 attempts, the string 1:16:240403:EWSN2024::WXhnFeDleN:96620 hashes to 0000babe195c81d00ecec4cd8f0dc1572ebd46d4, which has 16 leading zero bits (0x0000) as required. The string 1:16:240403:EWSN2024::WXhnFeDleN:96620 is hence the solution to the challenge.
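To make the difficulty check concrete, here is a minimal C sketch (with illustrative names, not code from the reference implementation) of counting a digest's leading zero bits; it assumes the SHA-1 digest is already available as raw bytes:

```c
#include <stdint.h>
#include <assert.h>

/* Count leading zero bits of a digest, most significant bit first.
   `len` is the digest length in bytes (20 for SHA-1). */
int leading_zero_bits(const uint8_t *digest, int len) {
    int bits = 0;
    for (int i = 0; i < len; i++) {
        if (digest[i] == 0) { bits += 8; continue; }
        /* Count zero bits from the top of the first nonzero byte. */
        for (uint8_t b = digest[i]; !(b & 0x80); b <<= 1) bits++;
        break;
    }
    return bits;
}
```

A candidate solves a 16-bit challenge when this function returns at least 16 for its digest; the two digests in the example above yield 2 and 16 leading zero bits, respectively.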
At the start of an experiment, an external I2C FRAM memory will contain a list of challenges similar to the above example, to which contestants need to supply solutions despite the intermittent energy supply. At the end of an experiment, the computed solutions are automatically collected by E-Cube, the number of correct solutions is tallied against any incorrect solutions, and points are awarded accordingly. Additional technical details, a reference solution, and the evaluation procedure will be posted on the competition's website.
In embedded and wireless systems, using energy-harvesting devices with non-trivial compute performance demands new thinking about call-stack and memory management. The core difficulty is not a lack of available power but the uncertainty of whether power will remain steady through the next clock cycle. We wanted to practice implementing novel approaches in scenarios where a power loss can corrupt the system or compromise the validity of output data.
In this competition, we're constrained to the MSP430FR5994 MCU and subjected to power shutoffs that can be observed and handled by the firmware. We addressed these challenges through a structured approach: modular functions, state-saving in non-volatile memory (NVM), and optimized compiler-level control over memory and operations.
TLDR: Hashcash is a proof-of-work scheme built on the SHA-1 hash function. The goal is to find a string that, when hashed, has a certain number of leading zero bits. When such a string is found, the algorithm ends. Otherwise, the counter in the string is incremented and the new string is hashed with SHA-1 again. This process repeats until a qualifying hash is found.
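The search loop described above can be sketched as follows. A toy 32-bit FNV-1a mixing function stands in for SHA-1 to keep the example short, and the `solve`/`toy_hash` names are illustrative, not from the organizers' reference code:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy 32-bit hash (FNV-1a). It stands in for SHA-1 only so this sketch
   stays short; the real algorithm hashes the header string with SHA-1. */
uint32_t toy_hash(const char *s) {
    uint32_t h = 2166136261u;
    while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
    return h;
}

/* Append an increasing counter to the fixed prefix and re-hash the full
   string until the hash has `bits` leading zero bits (bits < 32 here). */
uint32_t solve(const char *prefix, int bits, char *out, size_t n) {
    for (uint32_t counter = 1;; counter++) {
        snprintf(out, n, "%s%u", prefix, counter);
        if ((toy_hash(out) >> (32 - bits)) == 0)
            return counter;   /* `out` now holds the solution string */
    }
}
```

Note that the string is rebuilt and re-hashed from scratch each iteration; the hash output is never fed back into the hash.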
Long: Just go to Wikipedia.
Our approach is a hybrid inspired by many works in the area. While reviewing the problem's literature, we recognized several recurring trends:
Due to the importance of consistency, Write-After-Read (WAR) hazards are at the heart of the intermittent-computing challenge. A WAR hazard arises when code reads a non-volatile variable and later writes it back: if power fails after the write and execution restarts from an earlier point, the re-executed read observes the already-updated value, leaving the system in an inconsistent state. To overcome these hazards, we used customized compiler instructions to minimize WAR occurrences, and we restructured the code to group WAR-prone operations into atomic function blocks so that a block either runs to completion or not at all. This design guarantees a consistent recovery state by preventing partially updated variables.
Compiler Control: By customizing compiler flags and reordering instructions, we ensured that memory and register operations were executed in an optimized sequence to avoid read-write conflicts and memory corruption.
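One way to sketch the WAR-avoidance idea is double buffering with a single commit point. This is an illustrative pattern, not our exact code, and plain statics stand in for FRAM-resident variables on the MSP430FR5994 (which would normally be placed in a persistent section):

```c
#include <stdint.h>

/* Two copies of the state in "NVM" plus a selector. */
typedef struct {
    uint32_t counter;
    uint32_t checksum;   /* lets recovery detect a torn write */
} nvm_slot_t;

static nvm_slot_t slots[2];       /* double buffer standing in for NVM */
static volatile uint8_t active;   /* index of the last complete slot */

static uint32_t slot_sum(const nvm_slot_t *s) { return s->counter ^ 0xA5A5A5A5u; }

/* Read from the active copy, write the new value into the OTHER copy,
   then flip `active` last. The flip is the single commit point: a power
   loss at any instant leaves one fully consistent copy to recover from,
   so the read-then-write never targets the same NVM location. */
void nvm_increment(void) {
    const nvm_slot_t *cur = &slots[active];
    nvm_slot_t *next = &slots[1 - active];
    next->counter = cur->counter + 1;
    next->checksum = slot_sum(next);
    active = 1 - active;          /* commit */
}

uint32_t nvm_read(void) { return slots[active].counter; }
int nvm_valid(void) { return slots[active].checksum == slot_sum(&slots[active]); }
```

The key property is that the write never touches the copy that was read, which removes the WAR hazard on that variable entirely.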
Ensuring efficient reads and writes in our program involved managing program state effectively and saving it at crucial moments. Surviving power interruptions required a robust state-saving mechanism in non-volatile memory (NVM). We implemented a checkpoint system that saves a large batch of state at once, rather than writing each variable as it is updated.
Context Structure: A custom context_t struct was used to store key variables, such as the SHA-1 state values, the message being hashed, and the iteration count. This structure was saved to NVM in a batch after each atomic function to keep the persisted variables as fresh as possible.
Checkpointing: It is common practice to checkpoint, i.e., save the current state of the program, after key computations or expensive operations, both to avoid redundant work and to preserve important information across outages. We implemented this through function atomicity: after a function completes, all the data from that computation is stored in NVM before execution continues.
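A hypothetical layout of the context_t and a batched checkpoint might look like the following; the field and function names are illustrative, and a static struct stands in for the FRAM-resident copy (which on the MSP430FR5994 would carry a persistent-section attribute):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MSG_MAX   64
#define CTX_MAGIC 0xC0FFEEu

typedef struct {
    uint32_t h[5];          /* SHA-1 working state A..E */
    char     msg[MSG_MAX];  /* candidate string being hashed */
    uint32_t counter;       /* hashcash counter */
    uint32_t stage;         /* which atomic function completed last */
    uint32_t magic;         /* must stay the LAST field: marks validity */
} context_t;

static context_t nvm_ctx;   /* stand-in for the FRAM-resident checkpoint */

/* Batch-save the whole context after an atomic function completes. The
   magic word is cleared first and rewritten last, so a checkpoint torn
   by a power loss is recognizably invalid on the next boot. */
void checkpoint(const context_t *ctx) {
    nvm_ctx.magic = 0;                                   /* invalidate */
    memcpy(&nvm_ctx, ctx, offsetof(context_t, magic));   /* payload only */
    nvm_ctx.magic = CTX_MAGIC;                           /* commit */
}

/* Returns 1 if a valid checkpoint was restored, 0 for a cold start. */
int restore(context_t *ctx) {
    if (nvm_ctx.magic != CTX_MAGIC) return 0;
    memcpy(ctx, &nvm_ctx, offsetof(context_t, magic));
    return 1;
}
```

Keeping the validity marker as the last field (and the last write) is what makes the batch save safe against power loss mid-copy.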
Our system was designed with modular functions, isolating portions of the SHA-1 computation to complete before continuing to the next step in the algorithm.
Atomic Function Execution: Each function was designed to run to completion before the context is updated, ensuring that the system can resume from any function boundary without reprocessing previously completed work. We also kept each function small to promote frequent context updates.
Instruction Pointer Flag: An instruction-pointer flag records which function was running when power was lost. On startup, the program resumes the algorithm from that function, minimizing data loss and avoiding unnecessary recomputation.
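The instruction-pointer flag can be sketched as a resume dispatch: on boot, a switch on the persisted stage flag falls back into the interrupted atomic function. The stage names below are illustrative, and a plain variable stands in for the FRAM-resident flag:

```c
#include <stdint.h>

enum stage { ST_INIT = 0, ST_BUILD_MSG, ST_SHA1_BLOCK, ST_CHECK, ST_DONE };

volatile uint8_t stage;    /* would be FRAM-resident on real hardware */
int work_log[8], work_n;   /* records which steps ran, for this demo */

static void run_step(int s) { work_log[work_n++] = s; }

/* Each case finishes its atomic function and THEN advances the flag, so
   a power cut between the two repeats at most one function on reboot.
   The deliberate fall-through executes the remaining stages in order. */
void resume(void) {
    switch (stage) {
    case ST_INIT:       run_step(ST_INIT);       stage = ST_BUILD_MSG;  /* fall through */
    case ST_BUILD_MSG:  run_step(ST_BUILD_MSG);  stage = ST_SHA1_BLOCK; /* fall through */
    case ST_SHA1_BLOCK: run_step(ST_SHA1_BLOCK); stage = ST_CHECK;      /* fall through */
    case ST_CHECK:      run_step(ST_CHECK);      stage = ST_DONE;       /* fall through */
    case ST_DONE:       break;
    }
}
```

In the real system each case would also checkpoint the context before advancing the flag, so the flag never points past work that has not been persisted.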
Our implementation worked, for the most part. We got a working implementation of hashcash, and we had a sample intermittent-computing program that kept running correctly when power was cut. But we weren't able to integrate intermittent computing into the hashcash algorithm, so we didn't get a submission in to the competition.
Overall, I'm a little sad we didn't get to actually participate, but I had a decent idea that we weren't going to have a shot, especially since we'd be competing against full-time PhD students and researchers. Still, I'm glad for the experience, and it was fun at least getting it implemented.