Robust Unified Heterogeneous Model Integration (RUHMI) - A framework for AI model optimization and deployment. This repository provides the AI MCU Compiler for Renesas embedded platforms, powered by EdgeCortix® MERA™.
The RUHMI Framework[^1] provides a compiler and the tools needed to convert machine learning models into C source code for a range of Renesas MCUs powered by Arm Ethos-U NPUs. The software stack generates C source code with tight integration into Renesas e2 studio. It also ships with the Mera Quantizer, a post-training static INT8 quantizer that helps more demanding models meet the memory and latency constraints typical of microcontrollers and Ethos-U accelerators.
Get up and running in 6 steps (Ubuntu Linux / WSL):
```bash
# 1. Clone the repository
git clone https://github.com/renesas/ruhmi-framework-mcu.git
cd ruhmi-framework-mcu

# 2. Create and activate a virtual environment
python3.10 -m venv mera-env
source mera-env/bin/activate

# 3. Install dependencies and MERA
pip install --upgrade pip
pip install decorator typing_extensions psutil attrs pybind11 cmake junitparser
pip install ./install/mera-2.5.0+pkg.3577-cp310-cp310-manylinux_2_27_x86_64.whl

# 4. Download a sample model
wget https://raw.githubusercontent.com/mlcommons/tiny/master/benchmark/training/anomaly_detection/trained_models/ad01_int8.tflite
mkdir -p models_int8 && mv ad01_int8.tflite models_int8/

# 5. Deploy the model to C code
cd scripts
python mcu_compile.py ../models_int8 ../deploy_output --npu

# 6. Check model metrics
python utils/check_model_metrics.py ../deploy_output/ad01_int8_NPU
```

Your compiled C source code will be in deploy_output/ad01_int8_no_ospi/build/MCU/compilation/src/.
📖 For detailed installation instructions, see the Installation Guide.
Supported devices:
- Renesas MCU RA8P1 series
- Renesas MCU RA8xx series (non-NPU devices)
RUHMI supports Ubuntu Linux and Windows. The table below outlines the prerequisites for each platform.
| Requirement | Ubuntu Linux | Windows |
|---|---|---|
| OS Version | Ubuntu 22.04 (recommended) | Windows 10 or 11 (11 recommended) |
| Python | Python 3.10.x via PyEnv or venv | Python 3.10.x via PyEnv or venv |
| Additional | — | Microsoft C++ runtime libraries |
📖 For detailed installation instructions, refer to the Installation Guide.
Sample scripts are provided for common use cases:
A unified compilation script mcu_compile.py handles both quantization (if needed) and deployment.
Capabilities:
- Deploy Pre-Quantized Models: Convert .tflite files directly to C code.
- Deploy FP32 Models: Convert .tflite, .onnx, or .pte files directly to C code.
- Quantize & Deploy: Convert FP32 models to INT8 using calibration data, then to C code.
- Platform Support: Target CPU or NPU (Ethos-U55).
📖 Detailed guide on executing model compilation with sample scripts
After processing a model, you will find several files in your deployment directory, including intermediate artifacts generated during compilation that are worth keeping for debugging.
The most important output is found under the directory <deployment_directory>/build/MCU/compilation/src.
This directory contains the model converted into a set of C99 source code files.
📖 Guide to the generated C source code
| Document | Description |
|---|---|
| AI Model Compiler API | API specification for the AI Compiler Python library |
| Tutorials | Hands-on Jupyter notebooks for learning the AI MCU Compiler workflow |
| Operator Support | Supported operators for each frontend framework |
| Visualizer | Model graph visualization tool |
| Inference Benchmark | Guide to benchmarking inference performance on RA8xx devices |
| Models Tested | List of tested models |
| Known Issues | Troubleshooting guides and platform-specific workarounds |
| Compiler Error Reference | Compile/runtime error references |
If you have any questions, please contact Renesas Technical Support or open an issue on GitHub.
[^1]: RUHMI Framework's AI Compiler is powered by EdgeCortix® MERA™.