Supercharge Your AIoT with ESP32 and TensorFlow Lite Micro — Unlock Powerful Edge AI!
AIoT (Artificial Intelligence of Things) is transforming our lives, and to enable smart devices with real-time decision-making capabilities, edge AI has become a key technology. Edge AI allows data to be analyzed directly on the device without sending it back to the cloud, reducing latency and enhancing privacy and security.
Today, let’s dive into a hands-on tutorial on how to use the ESP32 together with TensorFlow Lite Micro (TFLM) to build your own AIoT super brain, achieving true edge AI applications!

What is AIoT?
AIoT stands for Artificial Intelligence of Things, combining the two major technologies of AI (Artificial Intelligence) and IoT (Internet of Things). It enables devices not only to connect and transmit data but also to make autonomous decisions and respond in real-time.
For example:
- Traditional IoT: Sensors collect data → send to the cloud → wait for server analysis → receive results back
- AIoT: Sensors collect data → perform real-time inference and decision-making on the device → respond immediately
AIoT has a wide range of applications, including smart homes, industrial automation, healthcare monitoring, and intelligent transportation. The key to realizing AIoT is edge AI technology, which allows AI algorithms to run directly on the device without relying on the cloud. This significantly reduces latency while improving efficiency and privacy.
What are ESP32 and TFLM?
ESP32: A Powerful AIoT Chip
ESP32 is a dual-core Wi-Fi + Bluetooth SoC (System on Chip) developed by Espressif. It is highly capable and low-power, making it a popular choice for today’s IoT devices. The ESP32-S3 version even includes vector instructions for AI inference acceleration, enabling TinyML applications.
TensorFlow Lite Micro: A Lightweight AI Framework for MCUs
TensorFlow Lite Micro (TFLM) is a lightweight AI inference framework developed by Google’s TensorFlow team, specifically designed for resource-constrained microcontrollers (MCUs). It allows you to convert trained machine learning models into a format suitable for running on MCUs, enabling AI inference directly on the ESP32 without relying on the cloud.
Why Choose ESP32 and TFLM?
- High Scalability: Supports a wide range of sensors and communication protocols
- Real-time Computing: Reduces reliance on the cloud and lowers network latency
- Cost Savings: Eliminates bandwidth and cloud resource expenses
- Robust Ecosystem: Perfect integration with ESP-IDF and the official esp-tflite-micro component
- Low Power Consumption: Ideal for battery-powered smart devices
Development Environment
Before starting your programming, make sure to complete the following preparations:
- Install ESP-IDF (version 5.x or higher): ESP-IDF is the official development framework for programming the ESP32, and it supports multiple operating systems such as Windows, macOS, and Linux.
- ESP32 Development Board: any ESP32-series board will do; an ESP32-S3 board is a good pick if you want the AI acceleration mentioned above.
AIoT Project Structure
Create a clean AIoT project named esp32_tflm_demo with the following layout:
esp32_tflm_demo/
├── CMakeLists.txt ← Top-level project CMake file
├── Makefile ← Optional, for compatibility
├── sdkconfig ← Auto-generated build config
├── sdkconfig.defaults ← Optional default config
├── idf_component.yml ← Component dependencies
│
├── main/ ← Main application
│ ├── esp32_tflm_demo.cpp ← Initialization and inference code
│ ├── model_data.h ← Model as C array (xxd -i)
│ └── CMakeLists.txt ← Component registration
│
├── managed_components/ ← Components auto-downloaded by the IDF Component Manager (ESP-IDF 5.x)
│ └── espressif__esp-tflite-micro/
│ └── tensorflow/
│ └── lite/
│ ├── micro/
│ ├── schema/
│ └── ...
│
├── build/ ← Build output folder
└── components/ ← Optional: custom local components
Environment Setup
- Install ESP-IDF
ESP-IDF is the official development framework for the ESP32. Please follow the official ESP-IDF installation guide to complete the setup.
git clone --recursive https://github.com/espressif/esp-idf.git
cd esp-idf
./install.sh
. ./export.sh
- Create a New Project and Add esp-tflite-micro
idf.py create-project esp32_tflm_demo
cd esp32_tflm_demo
idf.py add-dependency "espressif/esp-tflite-micro=*"
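The add-dependency command records the requirement in an idf_component.yml manifest, which idf.py normally creates inside main/. If you prefer to edit the manifest by hand, the entry should look roughly like this (the wildcard version spec below is just an example; pin a version if you need reproducible builds):
dependencies:
  espressif/esp-tflite-micro: "*"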
- Modify main/CMakeLists.txt to add the component dependency
idf_component_register(
    SRCS "esp32_tflm_demo.cpp"
    INCLUDE_DIRS "."
    REQUIRES esp-tflite-micro
)
Make sure the file name in SRCS matches the actual source file in main/ (esp32_tflm_demo.cpp in the layout above); rename the generated source file if necessary.
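For reference, the top-level CMakeLists.txt that idf.py create-project generates is standard ESP-IDF boilerplate and normally needs no changes; it looks roughly like this:
cmake_minimum_required(VERSION 3.16)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(esp32_tflm_demo)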
Code
Here is a simple TFLM inference example that simulates two floating-point inputs and outputs the prediction result:
#include "esp_log.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
#include "model_data.h" // Model data in C array format (generated from .tflite using xxd)
static const char *TAG = "TFLM_DEMO";
// Allocate memory for tensors (working memory for TFLM)
constexpr int tensor_arena_size = 8 * 1024;
static uint8_t tensor_arena[tensor_arena_size];
void app_main(void) {
ESP_LOGI(TAG, "Starting TensorFlow Lite Micro");
// Load the model from the compiled array
const tflite::Model* model = tflite::GetModel(g_model_tflite);
if (model->version() != TFLITE_SCHEMA_VERSION) {
ESP_LOGE(TAG, "Model schema version mismatch");
return;
}
// Register the necessary operators used in the model
tflite::MicroMutableOpResolver<4> resolver;
resolver.AddFullyConnected();
resolver.AddRelu();
resolver.AddLogistic();
// Create the interpreter
tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, tensor_arena_size);
if (interpreter.AllocateTensors() != kTfLiteOk) {
ESP_LOGE(TAG, "Tensor allocation failed");
return;
}
// Get pointers to the model's input and output tensors
TfLiteTensor* input = interpreter.input(0);
TfLiteTensor* output = interpreter.output(0);
// Run inference in a loop
while (1) {
// Generate two random input values between 0.0 and 1.0
float x1 = (float)(rand() % 100) / 100.0f;
float x2 = (float)(rand() % 100) / 100.0f;
// Assign inputs to the model
input->data.f[0] = x1;
input->data.f[1] = x2;
// Perform inference
if (interpreter.Invoke() != kTfLiteOk) {
ESP_LOGE(TAG, "Inference failed");
continue;
}
// Read the prediction result
float pred = output->data.f[0];
ESP_LOGI(TAG, "Input: %.2f, %.2f, Prediction: %.3f", x1, x2, pred);
// Delay for readability
vTaskDelay(pdMS_TO_TICKS(2000));
}
}
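The 8 KB tensor arena is generous for this tiny XOR model, but when you swap in your own model you will want to size it properly. A minimal sketch, assuming you add it right after AllocateTensors() succeeds (arena_used_bytes() is part of the MicroInterpreter API):
    // Log how much of the arena the model actually needed so tensor_arena_size can be tuned
    ESP_LOGI(TAG, "Tensor arena used: %u of %d bytes",
             (unsigned)interpreter.arena_used_bytes(), tensor_arena_size);
If the reported value is close to the arena size, or AllocateTensors() fails outright, increase tensor_arena_size; if it is far below, you can reclaim the difference for the rest of your application.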
Build a Simple Model with Python + TensorFlow
Here’s a demonstration of the full flow: train a simple model → convert it to .tflite → prepare it for the ESP32.
Open your terminal or command prompt and run:
pip install tensorflow numpy
Write and Run the Python Script: train_and_convert.py
import tensorflow as tf
import numpy as np

# 1. Prepare data (XOR problem, ideal for microcontrollers)
X = np.array([[0,0], [0,1], [1,0], [1,1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# 2. Build a simple neural network model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),              # Input layer with 2 features
    tf.keras.layers.Dense(4, activation='relu'),    # Hidden layer with 4 neurons and ReLU
    tf.keras.layers.Dense(1, activation='sigmoid')  # Output layer with 1 neuron and sigmoid
])

# 3. Compile the model with optimizer, loss function, and metrics
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# 4. Train the model on XOR data
model.fit(X, y, epochs=100, verbose=0)

# 5. Convert the trained Keras model to TensorFlow Lite format
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# 6. Save the converted model to a .tflite file
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

print("Model successfully saved as model.tflite")
After running the script, a model.tflite file will be generated.
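Before moving to the ESP32, it is worth sanity-checking the converted file on your PC with TensorFlow's built-in TFLite interpreter. The short sketch below (a separate script, e.g. verify_tflite.py, a name used here purely for illustration) feeds the four XOR inputs through model.tflite and prints the predictions:
import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run each XOR input through the model and print the prediction
for sample in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    x = np.array([sample], dtype=np.float32)  # shape (1, 2), matching the model's input tensor
    interpreter.set_tensor(input_details[0]['index'], x)
    interpreter.invoke()
    y = interpreter.get_tensor(output_details[0]['index'])
    print(f"Input: {sample} -> Prediction: {y[0][0]:.3f}")
If the predictions separate the classes (closer to 0 for [0,0] and [1,1], closer to 1 for [0,1] and [1,0]), the model has learned XOR; if not, train for more epochs before converting.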
Run the following in a terminal (xxd is preinstalled on macOS and available on most Linux distributions):
xxd -i model.tflite > model_data.h
This converts the TensorFlow Lite .tflite model into a C-style array and writes it to model_data.h, which can be included directly in C/C++ code.
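The generated header is just a byte array plus a length. Note that xxd derives the array name from the file name (model.tflite → model_tflite), which is exactly the symbol the C++ code above passes to tflite::GetModel(). The contents look roughly like this (the bytes and length shown here are placeholders, not real values):
unsigned char model_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, /* ... */
};
unsigned int model_tflite_len = 1024;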
Note: TensorFlow Version Compatibility
Before you begin, make sure your TensorFlow version is compatible with this example. It’s recommended to use TensorFlow 2.15.0 to avoid errors during model conversion or inference due to version differences. You can install the specific version using the following command:
pip uninstall tensorflow -y
pip install tensorflow==2.15.0
Using the correct version of TensorFlow helps ensure smooth model training and conversion, and prevents compatibility issues when running the model on ESP32 with TensorFlow Lite Micro.
Compile and Flash
After writing the code, you can use the ESP-IDF tools to build, flash, and monitor:
In the ESP-IDF toolbar at the lower left of VS Code (or use the equivalent idf.py commands shown after this list):
- Click Build project
- Click Flash device
- Click Monitor device
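If you prefer the command line, the same three steps map directly to idf.py; the port below is only an example, so substitute whatever your board enumerates as:
idf.py build
idf.py -p /dev/ttyUSB0 flash
idf.py -p /dev/ttyUSB0 monitor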
After the program starts, you can view the ESP-IDF log output in the serial monitor:
I (282) TFLM_DEMO: Starting TensorFlow Lite Micro
I (282) TFLM_DEMO: Input: 0.33, 0.43, Prediction: 0.422
I (2282) TFLM_DEMO: Input: 0.62, 0.29, Prediction: 0.434
I (4282) TFLM_DEMO: Input: 0.00, 0.08, Prediction: 0.522
I (6282) TFLM_DEMO: Input: 0.52, 0.56, Prediction: 0.374
I (8282) TFLM_DEMO: Input: 0.56, 0.19, Prediction: 0.465
I (10282) TFLM_DEMO: Input: 0.11, 0.51, Prediction: 0.436
I (12282) TFLM_DEMO: Input: 0.43, 0.05, Prediction: 0.519
I (14282) TFLM_DEMO: Input: 0.08, 0.93, Prediction: 0.366
I (16282) TFLM_DEMO: Input: 0.30, 0.66, Prediction: 0.385
I (18282) TFLM_DEMO: Input: 0.69, 0.32, Prediction: 0.422
I (20282) TFLM_DEMO: Input: 0.17, 0.47, Prediction: 0.436
I (22282) TFLM_DEMO: Input: 0.72, 0.68, Prediction: 0.329
I (24282) TFLM_DEMO: Input: 0.80, 0.23, Prediction: 0.441
Conclusion
With TensorFlow Lite Micro, we have successfully deployed a trained neural network model onto the resource-constrained ESP32, achieving true edge AI inference. The entire process, from training the model, converting it to a .tflite file, and embedding it as a C array, to printing prediction results via esp_log, relies on no external sensors or cloud resources, making it an excellent starter example for AIoT development.
This is not only a demonstration of what technology can do but also embodies the spirit of “AI for everyone, everywhere.” Now, all you need is an ESP32 and a bit of passion to start building your own AIoT super brain!
If you’re ready, the next steps could be adding sensors, network connectivity, or implementing your custom models—letting your ESP32 not just compute, but truly think.