diff --git a/examples/community/engagement-tracking/Facial_Metric_Inference.ipynb b/examples/community/engagement-tracking/Facial_Metric_Inference.ipynb new file mode 100644 index 0000000000..0ff566dc3e --- /dev/null +++ b/examples/community/engagement-tracking/Facial_Metric_Inference.ipynb @@ -0,0 +1,726 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "28c32112", + "metadata": {}, + "source": [ + "# Real-Time Engagement Classification with Roboflow Workflows\n", + "## Experimenting with Hosted Inference to Replace Traditional OpenFace/MARLIN Pipelines\n", + "\n", + "The goal of this project is to build a real-time system capable of estimating\n", + "student engagement directly from video. Engagement prediction has applications\n", + "in online education, classroom analytics, tutoring systems, and human–computer\n", + "interaction. The core challenge lies in accurately interpreting subtle facial\n", + "behaviors—such as gaze direction, head pose, and facial expressions—in\n", + "uncontrolled environments." + ] + }, + { + "cell_type": "markdown", + "id": "9c3aee8a", + "metadata": {}, + "source": [ + "# Previous implementation" + ] + }, + { + "cell_type": "markdown", + "id": "a58e67ce", + "metadata": {}, + "source": [ + "## The DAiSEE Dataset\n", + "DAiSEE (Dataset for Affective States in E-Environments) is a large-scale video\n", + "dataset containing **9,068 short clips** of students in natural learning\n", + "environments. Each clip is labeled with one of four engagement levels:\n", + "\n", + "1. **Very Low**\n", + "2. **Low**\n", + "3. **High**\n", + "4. 
**Very High**\n", + "\n", + "DAiSEE is challenging because:\n", + "- Labels are subjective\n", + "- Lighting and camera conditions vary widely\n", + "- Engagement is a high-level affective state, not directly visible" + ] + }, + { + "cell_type": "markdown", + "id": "720381f4", + "metadata": {}, + "source": [ + "## Feature Extraction: OpenFace and MARLIN\n", + "\n", + "### OpenFace 2.2\n", + "OpenFace is an open-source facial behavior analysis toolkit that extracts:\n", + "- Facial Action Units (AUs) \n", + "- Eye gaze direction \n", + "- Head pose \n", + "- Facial landmarks \n", + "\n", + "These features capture interpretable behavioral signals directly linked to\n", + "attention, focus, and affect." + ] + }, + { + "cell_type": "markdown", + "id": "02b6c0b3", + "metadata": {}, + "source": [ + "### MARLIN Embeddings\n", + "MARLIN is a deep learning model that produces a **768-dimensional embedding**\n", + "for every video frame. Unlike OpenFace’s engineered features, MARLIN provides\n", + "a rich, high-level representation of facial appearance and expression learned\n", + "from large-scale data.\n", + "\n", + "Together, OpenFace and MARLIN provide complementary information:\n", + "- **OpenFace:** interpretable, low-level behavioral cues \n", + "- **MARLIN:** abstract, high-level visual features " + ] + }, + { + "cell_type": "markdown", + "id": "e78749a8", + "metadata": {}, + "source": [ + "## EngageNet: A Multimodal Fusion Model\n", + "To combine these two modalities, we developed **EngageNet**, a dual-stream\n", + "Transformer-based fusion architecture. 
EngageNet:\n", + "\n", + "- Accepts **MARLIN embeddings** as a 768-dimensional vector \n", + "- Accepts **OpenFace features** as a short temporal sequence (9 frames) \n", + "- Processes each stream independently \n", + "- Uses Transformers to model temporal dependencies \n", + "- Fuses them into a joint representation \n", + "- Predicts one of the four DAiSEE engagement levels\n", + "\n", + "This notebook demonstrates:\n", + "1. The model architecture and validation performance \n", + "2. Why engagement classification is tested using a Roboflow-hosted model" + ] + }, + { + "cell_type": "markdown", + "id": "4cf0a8fc", + "metadata": {}, + "source": [ + "## Import and Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a7a53347", + "metadata": {}, + "outputs": [], + "source": [ + "import tensorflow as tf # pyright: ignore[reportMissingImports]\n", + "from keras import Input, Model # pyright: ignore[reportMissingImports]\n", + "from keras.layers import ( # pyright: ignore[reportMissingImports]\n", + " Dense, Dropout, LayerNormalization,\n", + " GlobalAveragePooling1D, MultiHeadAttention, Concatenate\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "8ebbed47", + "metadata": {}, + "source": [ + "## Fusion Model Architecture" + ] + }, + { + "cell_type": "markdown", + "id": "436d8fc6", + "metadata": {}, + "source": [ + "This function builds a dual-stream neural network that fuses MARLIN and OpenFace features using a Transformer layer to predict one of four engagement levels." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1cd8669c", + "metadata": {}, + "outputs": [], + "source": [ + "def build_fusion_model(openface_dim=35, hidden_dim=128, dropout_rate=0.4, num_heads=2, num_layers=1):\n", + "\n", + " # MARLIN Input Stream\n", + " marlin_input = Input(shape=(1, 768), name=\"marlin_input\")\n", + " x1 = LayerNormalization()(marlin_input)\n", + " x1 = GlobalAveragePooling1D()(x1)\n", + " x1 = Dense(256, activation=\"relu\")(x1)\n", + " x1 = Dropout(dropout_rate)(x1)\n", + " x1 = Dense(hidden_dim, activation=\"relu\")(x1)\n", + "\n", + " # OpenFace Input Stream: 9-frame temporal sequence.\n", + " # NOTE: openface_dim is an assumed per-frame OpenFace feature count;\n", + " # set it to match the width of your extracted feature vectors.\n", + " openface_input = Input(shape=(9, openface_dim), name=\"openface_input\")\n", + " x2 = LayerNormalization()(openface_input)\n", + "\n", + " # Transformer layers\n", + " for _ in range(num_layers):\n", + " attn_out = MultiHeadAttention(num_heads=num_heads, key_dim=64)(x2, x2)\n", + " x2 = LayerNormalization()(x2 + attn_out)\n", + "\n", + " x2 = GlobalAveragePooling1D()(x2)\n", + " x2 = Dense(256, activation=\"relu\")(x2)\n", + " x2 = Dropout(dropout_rate)(x2)\n", + " x2 = Dense(hidden_dim, activation=\"relu\")(x2)\n", + "\n", + " # Fusion\n", + " fused = Concatenate()([x1, x2])\n", + " fused = Dense(hidden_dim, activation=\"relu\")(fused)\n", + " fused = Dropout(dropout_rate)(fused)\n", + " output = Dense(4, activation=\"softmax\")(fused)\n", + "\n", + " return Model(inputs=[marlin_input, openface_input], outputs=output)" + ] + }, + { + "cell_type": "markdown", + "id": "6df10e27", + "metadata": {}, + "source": [ + "## Model Summary" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "691df0f7", + "metadata": {}, + "outputs": [], + "source": [ + "fusion_model = build_fusion_model()\n", + "fusion_model.summary()" + ] + }, + { + "cell_type": "markdown", + "id": "ce298c72", + "metadata": {}, + "source": [ + "## Training Setup" + ] + }, + { + "cell_type": "markdown", + "id": "50e298f6", + "metadata": {}, + "source": [ + "This code compiles the fusion model, applies a cosine-annealed learning-rate schedule, checkpoints the best validation weights, and runs training."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "393f42d9", + "metadata": {}, + "outputs": [], + "source": [ + "import math\n", + "\n", + "fusion_model.compile(\n", + " loss=\"sparse_categorical_crossentropy\",\n", + " optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),\n", + " metrics=[\"accuracy\"],\n", + ")\n", + "\n", + "checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(\n", + " filepath=\"fusion_best.keras\",\n", + " monitor=\"val_accuracy\",\n", + " save_best_only=True,\n", + " verbose=1,\n", + ")\n", + "\n", + "def cosine_annealing(epoch, lr, T_max=200, eta_min=1e-6, base_lr=1e-3):\n", + " # Anneal from base_lr down to eta_min over T_max epochs. The incoming lr is\n", + " # deliberately ignored: deriving each step from the current lr would compound\n", + " # the decay every epoch.\n", + " return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / T_max)) / 2\n", + "\n", + "lr_schedule_cb = tf.keras.callbacks.LearningRateScheduler(\n", + " cosine_annealing,\n", + " verbose=0,\n", + ")\n", + "\n", + "history = fusion_model.fit(\n", + " x=[train_x1, train_x2], # pyright: ignore[reportUndefinedVariable]\n", + " y=train_y, # pyright: ignore[reportUndefinedVariable]\n", + " validation_data=([val_x1, val_x2], val_y), # pyright: ignore[reportUndefinedVariable]\n", + " epochs=200,\n", + " batch_size=32,\n", + " class_weight=class_weights_dict, # pyright: ignore[reportUndefinedVariable]\n", + " callbacks=[checkpoint_cb, lr_schedule_cb],\n", + " verbose=1,\n", + ")\n" + ] + }, + { + "cell_type": "markdown", + "id": "e9547a6d", + "metadata": {}, + "source": [ + "## Validation Performance Snippet" + ] + }, + { + "cell_type": "markdown", + "id": "23d9b8c6", + "metadata": {}, + "source": [ + "This block evaluates the saved fusion model on the validation set, producing accuracy, a classification report, and a confusion matrix."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7c579662", + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "from sklearn.metrics import classification_report, confusion_matrix # pyright: ignore[reportMissingImports]\n", + "\n", + "# Load best checkpoint if you want the best-val model\n", + "best_model = tf.keras.models.load_model(\"fusion_best.keras\", compile=False)\n", + "\n", + "val_probs = best_model.predict([val_x1, val_x2]) # pyright: ignore[reportUndefinedVariable]\n", + "val_preds = np.argmax(val_probs, axis=1)\n", + "\n", + "print(\"Validation accuracy:\",\n", + " np.mean(val_preds == val_y)) # pyright: ignore[reportUndefinedVariable]\n", + "\n", + "print(\"\\nClassification report:\")\n", + "print(classification_report(val_y, val_preds, digits=3)) # pyright: ignore[reportUndefinedVariable]\n", + "\n", + "print(\"\\nConfusion matrix:\")\n", + "print(confusion_matrix(val_y, val_preds)) # pyright: ignore[reportUndefinedVariable]" + ] + }, + { + "cell_type": "markdown", + "id": "676ea8bb", + "metadata": {}, + "source": [ + "## Visual overview of performance" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7d72a6f3", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "from PIL import Image, ImageDraw\n", + "\n", + "best_val_acc = np.max(history.history[\"val_accuracy\"])\n", + "final_train_acc = history.history[\"accuracy\"][-1]\n", + "final_val_acc = history.history[\"val_accuracy\"][-1]\n", + "final_train_loss = history.history[\"loss\"][-1]\n", + "final_val_loss = history.history[\"val_loss\"][-1]\n", + "epochs = len(history.history[\"accuracy\"])\n", + "\n", + "summary_text = (\n", + " \"EngageNet Fusion — Final Training Summary\\n\\n\"\n", + " f\"Epochs Trained: {epochs}\\n\"\n", + " f\"Final Train Accuracy: {final_train_acc:.4f}\\n\"\n", + " f\"Final Val Accuracy: {final_val_acc:.4f}\\n\"\n", + " f\"Best Val Accuracy: {best_val_acc:.4f}\\n\"\n", + " f\"Final Train Loss: {final_train_loss:.6f}\\n\", + " f\"Final Val Loss: {final_val_loss:.4f}\\n\"\n", + ")\n", + "\n", + "summary_img = Image.new(\"RGB\", (800, 350), color=(245, 245, 245))\n", + "draw = ImageDraw.Draw(summary_img)\n", + "draw.text((25, 25), summary_text, fill=(0, 0, 0))\n", + "\n", + "os.makedirs(\"training_plots\", exist_ok=True)\n", + "summary_img.save(\"training_plots/training_summary.png\")" + ] + }, + { + "cell_type": "markdown", + "id": "185cbceb", + "metadata": {}, + "source": [ + "![Training Summary](assets/summary.png)" + ] + }, + { + "cell_type": "markdown", + "id": "d5f6080a", + "metadata": {}, + "source": [ + "## Training and Validation curves" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "df0ee518", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import matplotlib.pyplot as plt\n", + "\n", + "# Reuse the history object returned by fusion_model.fit() in the training cell\n", + "\n", + "# Create output folder\n", + "os.makedirs(\"training_plots\", exist_ok=True)\n", + "\n", + "# Save Accuracy Plot\n", + "plt.figure(figsize=(8, 5))\n", + "plt.plot(history.history[\"accuracy\"], label=\"Train Accuracy\")\n", + "plt.plot(history.history[\"val_accuracy\"], label=\"Validation Accuracy\")\n", + "plt.xlabel(\"Epoch\")\n", + "plt.ylabel(\"Accuracy\")\n", + "plt.title(\"EngageNet Fusion: 
Training vs Validation Accuracy\")\n", + "plt.legend()\n", + "plt.grid(True, alpha=0.3)\n", + "plt.savefig(\"training_plots/fusion_training_accuracy.png\", dpi=300, bbox_inches=\"tight\")\n", + "plt.close()\n", + "\n", + "# Save Loss Plot\n", + "plt.figure(figsize=(8, 5))\n", + "plt.plot(history.history[\"loss\"], label=\"Train Loss\")\n", + "plt.plot(history.history[\"val_loss\"], label=\"Validation Loss\")\n", + "plt.xlabel(\"Epoch\")\n", + "plt.ylabel(\"Loss\")\n", + "plt.title(\"EngageNet Fusion: Training vs Validation Loss\")\n", + "plt.legend()\n", + "plt.grid(True, alpha=0.3)\n", + "plt.savefig(\"training_plots/fusion_training_loss.png\", dpi=300, bbox_inches=\"tight\")\n", + "plt.close()" + ] + }, + { + "cell_type": "markdown", + "id": "52c77ac1", + "metadata": {}, + "source": [ + "![Training Accuracy](assets/fusion_training_accuracy.png)" + ] + }, + { + "cell_type": "markdown", + "id": "dbdeb142", + "metadata": {}, + "source": [ + "![Loss Curve](assets/fusion_training_loss.png)\n" + ] + }, + { + "cell_type": "markdown", + "id": "b145ce6d", + "metadata": {}, + "source": [ + "## Roboflow implementation\n", + "\n", + "One major limitation of my offline model was its lack of reproducibility and its inability to run real-time inference in a portable way. Docker initially seemed like a solution, since it could bundle dependencies and weights into a shareable environment, but it proved impractical: the model remained tied to my local repo, and a headless container cannot easily access the webcams and other device-level features that real-time inference requires." + ] + }, + { + "cell_type": "markdown", + "id": "7eb36352", + "metadata": {}, + "source": [ + "To address this, I began exploring more lightweight, fully hosted alternatives. This led me to experiment with Roboflow’s single-label image classification workflow—uploading representative frames, training a hosted model, and testing whether it could reliably separate different engagement levels." + ] + }, + { + "cell_type": "markdown", + "id": "8abf2d79", + "metadata": {}, + "source": [ + "## Example Predictions from the Roboflow Classifier\n" + ] + }, + { + "cell_type": "markdown", + "id": "9d4cee0f", + "metadata": {}, + "source": [ + "![Attentive](assets/Attentive.png)" + ] + }, + { + "cell_type": "markdown", + "id": "c0b48896", + "metadata": {}, + "source": [ + "![Confused](assets/Confused.png)" + ] + }, + { + "cell_type": "markdown", + "id": "01d2f3d6", + "metadata": {}, + "source": [ + "![Non-Attentive](assets/Non-Attentive.png)" + ] + }, + { + "cell_type": "markdown", + "id": "94dbb6b4", + "metadata": {}, + "source": [ + "These sample predictions show that the Roboflow classifier successfully learned coarse engagement cues such as gaze direction, head orientation, and posture. However, the model is limited by the homogeneity of the dataset (same subject, environment, lighting, and clothing), so generalizing further would require additional, more diverse training data."
+ ] + }, + { + "cell_type": "markdown", + "id": "dc722003", + "metadata": {}, + "source": [ + "## Custom Roboflow Workflow for API-Based Inference" + ] + }, + { + "cell_type": "markdown", + "id": "c48708bd", + "metadata": {}, + "source": [ + "![Custom-Workflow](assets/Custom-Workflow.png)" + ] + }, + { + "cell_type": "markdown", + "id": "b4638afe", + "metadata": {}, + "source": [ + "## Connecting to Single-Label Classification Model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a3211a66", + "metadata": {}, + "outputs": [], + "source": [ + "import cv2\n", + "import os\n", + "from inference_sdk import InferenceHTTPClient\n", + "from dotenv import load_dotenv\n", + "\n", + "load_dotenv() # API credentials are read from the .env file\n", + "\n", + "API_URL = os.getenv(\"ROBOFLOW_API_URL\")\n", + "API_KEY = os.getenv(\"ROBOFLOW_API_KEY\")\n", + "\n", + "client = InferenceHTTPClient(\n", + " api_url=API_URL,\n", + " api_key=API_KEY,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "0f1b1f0b", + "metadata": {}, + "source": [ + "## Capturing one frame" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b5c35fbe", + "metadata": {}, + "outputs": [], + "source": [ + "# Capture one frame from the webcam\n", + "# NOTE: device index 1 is used here; on many machines the default camera is index 0\n", + "cap = cv2.VideoCapture(1)\n", + "ret, frame = cap.read()\n", + "cap.release()" + ] + }, + { + "cell_type": "markdown", + "id": "c7f72d87", + "metadata": {}, + "source": [ + "## Run workflow to receive JSON response" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d5497e28", + "metadata": {}, + "outputs": [], + "source": [ + "if ret:\n", + " # Save the captured frame temporarily\n", + " cv2.imwrite(\"frame.jpg\", frame)\n", + " \n", + " # Run the hosted workflow on the saved frame\n", + " result = client.run_workflow(\n", + " workspace_name=\"testing-qqggh\",\n", + " workflow_id=\"custom-workflow\",\n", + " images={\"image\": \"frame.jpg\"}\n", + " )\n", + "else:\n", + " print(\"Could not read a frame from the webcam.\")" + ] + }, + { + "cell_type": "markdown", + "id": "89048577", + "metadata": {}, + "source": [ + "## 
Create annotated engagement frame" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a0d03494", + "metadata": {}, + "outputs": [], + "source": [ + "# Parse JSON response to extract class name and confidence\n", + "try:\n", + " predictions = result[0]['predictions']['predictions']\n", + " if predictions:\n", + " class_name = predictions[0]['class']\n", + " confidence = predictions[0]['confidence']\n", + " else:\n", + " # Fallback to top-level values if predictions array is empty\n", + " class_name = result[0]['predictions'].get('top', 'unknown')\n", + " confidence = result[0]['predictions'].get('confidence', 0.0)\n", + "except (KeyError, IndexError, TypeError) as e:\n", + " print(f\"Error parsing result: {e}\")\n", + " print(f\"Result structure: {result}\")\n", + " class_name = \"unknown\"\n", + " confidence = 0.0\n", + "\n", + "# Format class name (capitalize properly, handling hyphenated words)\n", + "if '-' in class_name:\n", + " formatted_class = '-'.join(word.capitalize() for word in class_name.split('-'))\n", + "else:\n", + " formatted_class = class_name.capitalize()\n", + "\n", + "confidence_pct = confidence * 100\n", + "confidence_str = f\"{confidence_pct:.2f}%\"\n", + "\n", + "# Load the original frame.jpg image\n", + "annotated_frame = cv2.imread(\"frame.jpg\")\n", + "\n", + "if annotated_frame is not None:\n", + " # Draw box overlay in top-left corner \n", + " box_x, box_y = 0, 0\n", + " text = f\"{formatted_class} {confidence_str}\"\n", + " \n", + " # Get text size to determine box dimensions\n", + " font = cv2.FONT_HERSHEY_SIMPLEX\n", + " font_scale = 0.85 \n", + " thickness = 2\n", + " (text_width, text_height), baseline = cv2.getTextSize(text, font, font_scale, thickness)\n", + " \n", + " # Box dimensions with padding (adjusted for better proportions)\n", + " padding_x = 14\n", + " padding_y = 10\n", + " box_width = text_width + padding_x * 2\n", + " box_height = text_height + baseline + padding_y * 2\n", + " \n", + " # Draw 
filled purple rectangle\n", + " purple_color = (128, 0, 128) \n", + " cv2.rectangle(annotated_frame, \n", + " (box_x, box_y), \n", + " (box_x + box_width, box_y + box_height), \n", + " purple_color, \n", + " -1)\n", + " \n", + " # Add white text overlay\n", + " text_x = box_x + padding_x\n", + " text_y = box_y + text_height + padding_y\n", + " white_color = (255, 255, 255)\n", + " cv2.putText(annotated_frame, text, (text_x, text_y), \n", + " font, font_scale, white_color, thickness)\n", + " \n", + " # Display the annotated image\n", + " cv2.imshow(\"Roboflow Test - Annotated\", annotated_frame)\n", + " print(f\"Class: {formatted_class}, Confidence: {confidence_str}\")\n", + " print(\"Press any key to close the window...\")\n", + " cv2.waitKey(0)\n", + " cv2.destroyAllWindows()\n", + " \n", + " # Save annotated image\n", + " output_filename = \"frame_annotated.jpg\"\n", + " cv2.imwrite(output_filename, annotated_frame)\n", + " print(f\"Annotated image saved as {output_filename}\")\n", + "else:\n", + " print(\"Could not load frame.jpg for annotation.\")" + ] + }, + { + "cell_type": "markdown", + "id": "c29027fe", + "metadata": {}, + "source": [ + "## Output" + ] + }, + { + "cell_type": "markdown", + "id": "f15c032c", + "metadata": {}, + "source": [ + "![Annotated-Attentive](assets/frame_annotated_attentive.jpg)" + ] + }, + { + "cell_type": "markdown", + "id": "062004cf", + "metadata": {}, + "source": [ + "![Annotated-Non-Attentive](assets/frame_annotated_non_attentive.jpg)" + ] + }, + { + "cell_type": "markdown", + "id": "854468d9", + "metadata": {}, + "source": [ + "![Annotated-Confused](assets/frame_annotated_confused.jpg)" + ] + }, + { + "cell_type": "markdown", + "id": "32434f17", + "metadata": {}, + "source": [ + "## Results and Conclusions" + ] + }, + { + "cell_type": "markdown", + "id": "f5dadfeb", + "metadata": {}, + "source": [ + "Offline validation mattered less than real-time accuracy, since my earlier MARLIN + MediaPipe pipeline was slow and only correct 
about 45–50% of the time. The Roboflow model, trained on far less data, ran instantly and produced consistently accurate predictions (aside from occasional “confused” cases). Tests in different lighting showed similarly strong behavior, making this a far more practical option for real-time engagement classification." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": ".venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.0" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}