Merged

114 commits
8c0e19a
Add LangExtract recognizer for PII extraction
RonShakutai Oct 30, 2025
e3f68d1
refine the docs
RonShakutai Oct 30, 2025
811c639
narrow support for oollama only
RonShakutai Oct 30, 2025
3640527
Refactor LangExtract tests to use Ollama; remove API key dependency
RonShakutai Oct 30, 2025
9a99a0d
adding first draft of docker compose
RonShakutai Nov 2, 2025
3c88dd9
Update model_id in tests to use 'gemma2:2b' instead of 'gemini-2.5-fl…
RonShakutai Nov 2, 2025
feaee53
Refactor LangExtract documentation to focus on Ollama support; remove…
RonShakutai Nov 2, 2025
013198e
Update README to remove Ollama setup instructions and clarify integra…
RonShakutai Nov 2, 2025
f8326f9
Enhance Ollama installation script with progress messages and error h…
RonShakutai Nov 2, 2025
f8b7d03
auto ruff fixes
RonShakutai Nov 2, 2025
dcc1450
Enhance LangExtractRecognizer tests with real Ollama integration
RonShakutai Nov 2, 2025
64d0f09
Remove unnecessary line breaks in LLM-based PII detection section of …
RonShakutai Nov 2, 2025
0905619
Add LangExtract LLM-based PII detection test and configuration
RonShakutai Nov 2, 2025
d5dde47
Improve Ollama availability check with setup attempt message
RonShakutai Nov 2, 2025
0afa506
Increase wait time for services and update healthcheck parameters for…
RonShakutai Nov 2, 2025
7fbf111
Add Ollama setup for Analyzer tests and improve availability check
RonShakutai Nov 2, 2025
31c0f93
Set timeout for Ollama setup in Analyzer tests to 8 minutes
RonShakutai Nov 2, 2025
f087983
Enhance Ollama setup for Analyzer tests with improved installation an…
RonShakutai Nov 2, 2025
ed9917f
Update Ollama model references from gemma2:2b to llama3.2:1b across c…
RonShakutai Nov 2, 2025
ffa44e1
Update model references from llama3.2:1b to gemma2:2b across configur…
RonShakutai Nov 2, 2025
1e0d41b
Remove 'enabled' configuration from LangExtract settings in YAML file…
RonShakutai Nov 2, 2025
574312f
Update Ollama service configuration: change port mapping and modify h…
RonShakutai Nov 2, 2025
304dbf3
Refactor logging in LangExtractRecognizer: reduce verbosity and impro…
RonShakutai Nov 2, 2025
f808daf
Update Ollama service configuration: modify port mapping and healthch…
RonShakutai Nov 2, 2025
27ee08b
Update CI workflow and tests: reduce sleep duration and add environme…
RonShakutai Nov 2, 2025
1b9e97d
Reduce sleep duration in CI workflow from 150 to 60 seconds
RonShakutai Nov 2, 2025
f9e5435
Update LangExtract model references from gemma2:2b to gemma3:1b and r…
RonShakutai Nov 3, 2025
299f0ea
docs and prompt fixes
RonShakutai Nov 5, 2025
2e593b9
finalizing the pr
RonShakutai Nov 10, 2025
109a25e
Update Ollama image to latest version and add LangExtract PII/PHI ext…
RonShakutai Nov 10, 2025
054ea1c
fix bad example
RonShakutai Nov 10, 2025
fe10361
fix unit-tests
RonShakutai Nov 10, 2025
af51ceb
refactor: clean up .env file and simplify skip_engine logic in tests
RonShakutai Nov 10, 2025
0e866dd
Merge branch 'main' of https://github.com/microsoft/presidio into fea…
RonShakutai Nov 10, 2025
159c955
chore: add a new line to .env file for better readability
RonShakutai Nov 10, 2025
a4a63c4
chore: remove unnecessary blank line from .env file
RonShakutai Nov 10, 2025
320fe9e
revert .env
RonShakutai Nov 10, 2025
d77958a
intial commit
RonShakutai Nov 10, 2025
286b78a
Remove skip marker for spacy_nlp_engine fixture
RonShakutai Nov 10, 2025
b50a6be
Remove skip markers for stanza and transformers NLP engine fixtures
RonShakutai Nov 10, 2025
d7a91c2
Merge branch 'fix-ci-unit-tests' of https://github.com/microsoft/pres…
RonShakutai Nov 10, 2025
c51a695
Remove Ollama recognizer test and update default recognizers configur…
RonShakutai Nov 10, 2025
f56b97e
Remove unused Ollama recognizer configuration and update prompt file …
RonShakutai Nov 10, 2025
92b1136
Add end-to-end tests for API anonymization and redaction features
RonShakutai Nov 11, 2025
253c220
Merge branch 'main' of https://github.com/microsoft/presidio into fea…
RonShakutai Nov 11, 2025
4b82e84
Remove unused Ollama recognizer imports and related tests
RonShakutai Nov 11, 2025
8c81ca7
Update requirements and improve Ollama recognizer availability checks…
RonShakutai Nov 11, 2025
d205959
Fix formatting in requirements.txt for analyzer and anonymizer depend…
RonShakutai Nov 11, 2025
2ee5179
Update Ollama model ID from gemma3:1b to gemma2:2b in configuration a…
RonShakutai Nov 11, 2025
1dc2328
gemma2:2b
RonShakutai Nov 11, 2025
26d5097
finalizing the pr
RonShakutai Nov 14, 2025
7684c79
Remove unused ABC import from lm_recognizer.py
RonShakutai Nov 14, 2025
097ef84
Fix indentation in docker-compose.yml for volumes section
RonShakutai Nov 14, 2025
e693ca6
Fix line break for clarity in adding_recognizers.md
RonShakutai Nov 14, 2025
95e559a
Add timeout settings for Ollama recognizer and test cases
RonShakutai Nov 14, 2025
7b7f8fa
Refactor timeout comment for clarity in OllamaLangExtractRecognizer
RonShakutai Nov 14, 2025
f10a103
Update Ollama model version and add configuration for LangExtract rec…
RonShakutai Nov 14, 2025
703aab6
Update examples_file path in configuration for Ollama recognizer
RonShakutai Nov 14, 2025
034a1bf
Remove timeout decorator from Ollama recognizer
RonShakutai Nov 14, 2025
768cb0e
Add rerun settings to unit and E2E tests for improved stability
RonShakutai Nov 14, 2025
0f8fad6
Remove rerun settings from unit and E2E test commands for simplification
RonShakutai Nov 14, 2025
aed8a2d
Set max-parallel to 2 for local build and E2E tests
RonShakutai Nov 14, 2025
5e4e0a0
Remove max-parallel setting from local build and E2E tests
RonShakutai Nov 14, 2025
0220c5b
move poetry cache dir
tamirkamara Nov 16, 2025
f30bb7a
Merge branch 'tamirkamara/fix-disk-space' of https://github.com/micro…
RonShakutai Nov 16, 2025
7b5f75a
Merge branch 'main' into feature-langextract
SharonHart Nov 16, 2025
2e8165c
pr changes
RonShakutai Nov 16, 2025
0ef9d31
code review changes
RonShakutai Nov 16, 2025
be7ee67
ruff check
RonShakutai Nov 16, 2025
1e1e107
remove unused json import in test_ollama_recognizer.py
RonShakutai Nov 16, 2025
083e853
refactor test names for clarity and consistency in test_ollama_recogn…
RonShakutai Nov 16, 2025
3a339b7
finalizing the PR
RonShakutai Nov 23, 2025
43a196e
Merge branch 'main' into feature-langextract
RonShakutai Nov 23, 2025
d377a56
self code review fixes
RonShakutai Nov 23, 2025
e27faa7
Merge branch 'feature-langextract' of https://github.com/microsoft/pr…
RonShakutai Nov 23, 2025
f9097ff
Refactor LangExtractRecognizer to use yaml for configuration loading
RonShakutai Nov 23, 2025
68275e3
ruff fixes
RonShakutai Nov 23, 2025
21fb33c
Update error messages in OllamaLangExtractRecognizer tests for clarity
RonShakutai Nov 23, 2025
679b363
CR comment addressed
RonShakutai Nov 23, 2025
73cb9d1
Remove unused variables from Jinja2 prompt rendering in LangExtractRe…
RonShakutai Nov 23, 2025
da1c563
exporting functionality to helpers enlarging composition
RonShakutai Nov 23, 2025
974eff6
composition
RonShakutai Nov 24, 2025
5039eb2
Refactor entity mapper and langextract utilities for improved clarity…
RonShakutai Nov 24, 2025
f78ec8e
Refactor tests to use get_langextract_module for mocking LangExtract …
RonShakutai Nov 24, 2025
b27b09c
Refactor langextract utilities to improve clarity and error handling;…
RonShakutai Nov 24, 2025
bb58002
Refactor LLM utilities by simplifying docstrings and consolidating im…
RonShakutai Nov 24, 2025
08ef556
Update error message for missing Jinja2 installation to include poetr…
RonShakutai Nov 24, 2025
a2d4db7
Add Ollama recognizer configuration and tests for YAML integration
RonShakutai Nov 25, 2025
09a02f4
Refactor docstrings in OllamaLangExtractRecognizer for improved clari…
RonShakutai Nov 25, 2025
da16ede
Enhance OllamaLangExtractRecognizer initialization docstring to clari…
RonShakutai Nov 25, 2025
6f72919
Merge branch 'main' into feature-langextract
RonShakutai Nov 25, 2025
7185449
pr comments
RonShakutai Nov 25, 2025
d45ff0c
Refactor OllamaLangExtractRecognizer to streamline config path handli…
RonShakutai Nov 25, 2025
6b027bb
Merge branch 'feature-langextract' of https://github.com/microsoft/pr…
RonShakutai Nov 25, 2025
d5e05e0
Refactor Ollama recognizer test to improve clarity and enhance entity…
RonShakutai Nov 25, 2025
651ecce
Refactor tests for Ollama recognizer and LMRecognizer to improve exce…
RonShakutai Nov 25, 2025
95f5a6a
Update config path for Ollama recognizer in test configuration
RonShakutai Nov 25, 2025
85f1c7a
Update config paths for Ollama recognizer and add test configuration …
RonShakutai Nov 25, 2025
f87ae78
Remove test configuration for Ollama LangExtract
RonShakutai Nov 25, 2025
8158cfd
Remove test configuration for Ollama LangExtract recognizer
RonShakutai Nov 25, 2025
025a5b0
Fix formatting in resolve_config_path function for improved readability
RonShakutai Nov 25, 2025
0b6c62f
Enable UsLangExtractRecognizer and update its config path
RonShakutai Nov 25, 2025
1fe8a2b
change all configs to use gemma3:1b
RonShakutai Nov 25, 2025
f492cf5
Disable Ollama LangExtract recognizer and update its configuration path
RonShakutai Nov 25, 2025
94320e0
Update langextract configuration paths to use absolute paths for prom…
RonShakutai Nov 25, 2025
4024901
Remove test script for Ollama recognizer configuration loading
RonShakutai Nov 25, 2025
8397d43
Refactor config loading in examples and prompt loaders to use resolve…
RonShakutai Nov 25, 2025
2b42d6f
Refactor parameter description in load_yaml_examples and clean up imp…
RonShakutai Nov 25, 2025
6af3bb9
Update langextract paths to use repo-root-relative paths in tests and…
RonShakutai Nov 25, 2025
4dfe7fd
Enhance documentation for Ollama setup and improve __init__.py import…
RonShakutai Nov 26, 2025
54418fb
code review changes
RonShakutai Nov 26, 2025
dcb9f8c
Merge branch 'main' of https://github.com/microsoft/presidio into fea…
RonShakutai Nov 30, 2025
e682f60
pr comments & align to main
RonShakutai Nov 30, 2025
c43dd92
Merge branch 'main' into feature-langextract
RonShakutai Nov 30, 2025
10 changes: 10 additions & 0 deletions .github/workflows/ci.yml
@@ -348,6 +348,11 @@ jobs:
- name: Wait for services to be ready
run: sleep 60

- name: Download Ollama model for Ollama LangExtract recognizer tests
run: |
docker exec $(docker ps -qf "name=ollama") ollama pull qwen2.5:1.5b
docker exec $(docker ps -qf "name=ollama") ollama run qwen2.5:1.5b

- name: Run E2E tests
working-directory: e2e-tests
run: |
@@ -431,6 +436,11 @@ jobs:
- name: Wait for services to be ready
run: sleep 60

- name: Download Ollama model for Ollama LangExtract recognizer tests
run: |
docker exec $(docker ps -qf "name=ollama") ollama pull qwen2.5:1.5b
docker exec $(docker ps -qf "name=ollama") ollama run qwen2.5:1.5b

- name: Run E2E tests
working-directory: e2e-tests
run: |
23 changes: 23 additions & 0 deletions docker-compose.yml
@@ -1,4 +1,18 @@
services:
ollama:
image: ollama/ollama:latest
ports:
- "127.0.0.1:11434:11434" # or "127.0.0.1:11435:11434"
volumes:
- ollama-data:/root/.ollama
environment:
- OLLAMA_HOST=0.0.0.0
healthcheck:
test: ["CMD", "ollama", "list"]
interval: 10s
timeout: 10s
retries: 30 # ~5 minutes total
start_period: 60s
presidio-anonymizer:
image: ${REGISTRY_NAME}/${IMAGE_PREFIX}presidio-anonymizer${TAG}
build:
@@ -9,6 +23,7 @@ services:
- PORT=5001
ports:
- "5001:5001"

presidio-analyzer:
image: ${REGISTRY_NAME}/${IMAGE_PREFIX}presidio-analyzer${TAG}
build:
@@ -17,8 +32,13 @@ services:
- type=registry,ref=${REGISTRY_NAME}/${IMAGE_PREFIX}presidio-analyzer:latest
environment:
- PORT=5001
- OLLAMA_HOST=http://ollama:11434
ports:
- "5002:5001"
depends_on:
ollama:
condition: service_healthy

presidio-image-redactor:
image: ${REGISTRY_NAME}/${IMAGE_PREFIX}presidio-image-redactor${TAG}
build:
@@ -29,3 +49,6 @@ services:
- PORT=5001
ports:
- "5003:5001"

volumes:
ollama-data:
7 changes: 7 additions & 0 deletions docs/analyzer/adding_recognizers.md
@@ -165,6 +165,13 @@
On how to integrate Presidio with AHDS De-Identification Protected Health Inform
and a sample for a ADHS Remote Recognizer, refer to the
[AHDS de-Identification Integration document](../samples/python/ahds/index.md).

### Language Model-based PII/PHI detection recognizer

Presidio supports language model-based entity detection using LLMs and SLMs for flexible PII/PHI recognition.

The current implementation uses LangExtract with Ollama (local models). For full setup instructions and usage examples,
see the [Language Model-based PII/PHI Detection guide](../samples/python/langextract/index.md).

### Creating ad-hoc recognizers

In addition to recognizers in code, it is possible to create ad-hoc recognizers via the Presidio Analyzer API for regex and deny-list based logic.
1 change: 1 addition & 0 deletions docs/samples/index.md
@@ -19,6 +19,7 @@
| Usage | Text | Python file | [Azure AI Language as a Remote Recognizer](python/text_analytics/index.md) |
| Usage | Text | Python file | [Azure Health Data Services de-identification Service as a Remote Recognizer](python/ahds/index.md) |
| Usage | Text | Python file | [AHDS Surrogate Example](python/ahds/example_ahds_surrogate.py) |
| Usage | Text | Python file | [Language Model-based PII/PHI Detection using LangExtract](python/langextract/index.md) |
| Usage | CSV | Python file | [Analyze and Anonymize CSV file](https://github.com/microsoft/presidio/blob/main/docs/samples/python/process_csv_file.py) |
| Usage | Text | Python | [Using Flair as an external PII model](https://github.com/microsoft/presidio/blob/main/docs/samples/python/flair_recognizer.py)|
| Usage | Text | Python file | [Using Span Marker as an external PII model](https://github.com/microsoft/presidio/blob/main/docs/samples/python/span_marker_recognizer.py)|
181 changes: 181 additions & 0 deletions docs/samples/python/langextract/index.md
@@ -0,0 +1,181 @@
# Language Model-based PII/PHI Detection (Experimental Feature)

## Introduction

Presidio supports language model-based PII/PHI detection, enabling flexible entity recognition with both large and small language models (LLMs and SLMs). This approach detects both:
- **PII (Personally Identifiable Information)**: Names, emails, phone numbers, SSN, credit cards, etc.
- **PHI (Protected Health Information)**: Medical records, health identifiers, etc.

The default implementation uses [LangExtract](https://github.com/google/langextract) under the hood to integrate with language model providers.

## Entity Detection Capabilities

Unlike pattern-based recognizers, language model-based detection is flexible and depends on:

- The language model being used
- The prompt description provided
- The few-shot examples configured

The default configuration includes examples for common PII/PHI entities such as PERSON, EMAIL_ADDRESS, PHONE_NUMBER, US_SSN, CREDIT_CARD, MEDICAL_LICENSE, and more.
**You can customize the prompts and examples to detect any entity types relevant to your use case**.

For the default entity mappings and examples, see the [default configuration](https://github.com/microsoft/presidio/blob/main/presidio-analyzer/presidio_analyzer/conf/langextract_config_ollama.yaml).
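
To illustrate how entity mappings and the minimum score work together, here is a self-contained sketch. The function and dictionaries are hypothetical, not Presidio internals; the real mappings and threshold live in the configuration file linked above.

```python
# Illustrative sketch: map raw language-model extraction classes to Presidio
# entity names and drop results below a confidence threshold. Names here are
# hypothetical and chosen to mirror the documented configuration keys.

ENTITY_MAPPINGS = {
    "person": "PERSON",
    "email": "EMAIL_ADDRESS",
    "phone_number": "PHONE_NUMBER",
    "ssn": "US_SSN",
}

MIN_SCORE = 0.5


def consolidate(extractions):
    """Map extraction classes to Presidio entities; filter by MIN_SCORE."""
    results = []
    for item in extractions:
        presidio_entity = ENTITY_MAPPINGS.get(item["class"])
        if presidio_entity is None:
            continue  # unmapped classes (e.g. ignored labels) are skipped
        if item["score"] < MIN_SCORE:
            continue  # below the configured confidence threshold
        results.append({**item, "entity": presidio_entity})
    return results


raw = [
    {"class": "person", "text": "John Doe", "score": 0.9},
    {"class": "payment_status", "text": "paid", "score": 0.8},  # ignored label
    {"class": "email", "text": "john@example.com", "score": 0.3},  # low score
]
print(consolidate(raw))
```

Only the mapped, above-threshold extraction survives; this is the same shape of filtering that `entity_mappings` and `min_score` configure.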

## Supported Language Model Providers

Presidio supports the following language model providers through LangExtract:

1. **Ollama** - Local language model deployment (open-source models like Gemma, Llama, etc.)
2. **Azure OpenAI** - _Documentation coming soon_

## Language Model-based Recognizer Implementation

Presidio provides a hierarchy of recognizers for language model-based PII/PHI detection:

- **`LMRecognizer`**: Abstract base class for all language model recognizers (LLMs, SLMs, etc.)
- **`LangExtractRecognizer`**: Abstract base class for LangExtract library integration (model-agnostic)
- **`OllamaLangExtractRecognizer`**: Concrete implementation for Ollama local language models
- **`AzureOpenAILangExtractRecognizer`**: _Documentation coming soon_

[OllamaLangExtractRecognizer implementation](https://github.com/microsoft/presidio/blob/main/presidio-analyzer/presidio_analyzer/predefined_recognizers/third_party/ollama_langextract_recognizer.py)
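
The hierarchy above can be pictured schematically. This is an illustrative sketch of the class relationships only, not the actual Presidio source:

```python
from abc import ABC, abstractmethod


class LMRecognizer(ABC):
    """Base for all language-model recognizers (schematic only)."""

    @abstractmethod
    def analyze(self, text: str) -> list:
        """Return detected PII/PHI spans for `text`."""


class LangExtractRecognizer(LMRecognizer, ABC):
    """Model-agnostic LangExtract integration layer (schematic only)."""


class OllamaLangExtractRecognizer(LangExtractRecognizer):
    """Concrete recognizer backed by a local Ollama model (schematic only)."""

    def analyze(self, text: str) -> list:
        # The real implementation delegates to LangExtract over the Ollama API.
        return []
```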

---

## Using Ollama (Local Models)

### Prerequisites

1. **Install Presidio with LangExtract support**:
```sh
pip install "presidio-analyzer[langextract]"
```

2. **Set up Ollama**

You have two options to set up Ollama:

**Option 1: Docker Compose** (recommended for CPU)

This option requires Docker to be installed on your system.

**Where to run:** From the root presidio directory (where `docker-compose.yml` is located)

```bash
docker compose up -d ollama
docker exec presidio-ollama-1 ollama pull qwen2.5:1.5b
docker exec presidio-ollama-1 ollama list
```

**Platform differences:**
- **Linux/Mac**: Commands above work as-is
- **Windows**: Use PowerShell or CMD, commands are the same

If you don't have Docker installed:
- Linux: Follow [Docker installation guide](https://docs.docker.com/engine/install/)
- Mac: Install [Docker Desktop for Mac](https://docs.docker.com/desktop/install/mac-install/)
- Windows: Install [Docker Desktop for Windows](https://docs.docker.com/desktop/install/windows-install/)

**Option 2: Native installation** (recommended for GPU acceleration)

Follow the [official LangExtract Ollama guide](https://github.com/google/langextract?tab=readme-ov-file#using-local-llms-with-ollama).

After installation, pull and run the model:
```bash
ollama pull qwen2.5:1.5b
ollama run qwen2.5:1.5b
```

> This option provides better performance with GPU acceleration (e.g., on Mac with Metal Performance Shaders or systems with NVIDIA GPUs).
> The model must be pulled and run before using the recognizer. The default model is `qwen2.5:1.5b`.

3. **Configuration** (optional): Create your own `ollama_config.yaml` or use the [default configuration](https://github.com/microsoft/presidio/blob/main/presidio-analyzer/presidio_analyzer/conf/langextract_config_ollama.yaml)

### Usage

**Option 1: Enable in configuration file**

Enable the recognizer in [`default_recognizers.yaml`](https://github.com/microsoft/presidio/blob/main/presidio-analyzer/presidio_analyzer/conf/default_recognizers.yaml):
```yaml
- name: OllamaLangExtractRecognizer
enabled: true # Change from false to true
```

Then load the analyzer using this modified configuration file:

```python
from presidio_analyzer import AnalyzerEngine
from presidio_analyzer.recognizer_registry import RecognizerRegistryProvider

# Point to your modified default_recognizers.yaml with Ollama enabled
provider = RecognizerRegistryProvider(
conf_file="/path/to/your/modified/default_recognizers.yaml"
)
registry = provider.create_recognizer_registry()

# Create analyzer with the registry that includes Ollama recognizer
analyzer = AnalyzerEngine(registry=registry, supported_languages=["en"])

# Analyze text - Ollama recognizer will participate in detection
results = analyzer.analyze(text="My email is john.doe@example.com", language="en")
```

**Option 2: Add programmatically**

```python
from presidio_analyzer import AnalyzerEngine
from presidio_analyzer.predefined_recognizers.third_party.ollama_langextract_recognizer import OllamaLangExtractRecognizer

analyzer = AnalyzerEngine()
analyzer.registry.add_recognizer(OllamaLangExtractRecognizer())

results = analyzer.analyze(text="My email is john.doe@example.com", language="en")
```

!!! note "Note"
The recognizer is disabled by default in `default_recognizers.yaml` to avoid requiring Ollama for basic Presidio usage. Enable it when you have Ollama set up and running.

### Custom Configuration

To use a custom configuration file:

```python
analyzer.registry.add_recognizer(
OllamaLangExtractRecognizer(config_path="/path/to/custom_config.yaml")
)
```

### Configuration Options

The `langextract_config_ollama.yaml` file supports the following options:

- **`model_id`**: The Ollama model to use (default: `"qwen2.5:1.5b"`)
- **`model_url`**: Ollama server URL (default: `"http://localhost:11434"`)
- **`temperature`**: Model temperature for generation (default: `null` for model default)
- **`supported_entities`**: PII/PHI entity types to detect
- **`entity_mappings`**: Map LangExtract entity classes to Presidio entity names
- **`min_score`**: Minimum confidence score (default: `0.5`)
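
Putting these options together, a custom file might look like the following. Values are illustrative, and the key layout mirrors `e2e-tests/resources/ollama_test_config.yaml` from this PR; consult the default configuration for the authoritative schema:

```yaml
# Illustrative custom configuration; key names follow the options listed above.
lm_recognizer:
  supported_entities:
    - PERSON
    - EMAIL_ADDRESS
  min_score: 0.5

langextract:
  entity_mappings:
    person: PERSON
    email: EMAIL_ADDRESS

model:
  model_id: qwen2.5:1.5b
  model_url: http://localhost:11434
  temperature: 0.0
```

Pass the file's path via the `config_path` argument of `OllamaLangExtractRecognizer`.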

See the [configuration file](https://github.com/microsoft/presidio/blob/main/presidio-analyzer/presidio_analyzer/conf/langextract_config_ollama.yaml) for all options.

## Troubleshooting

**ConnectionError: "Ollama server not reachable"**
- Ensure Ollama is running: `docker ps` or check `http://localhost:11434`
- Verify the `model_url` in your configuration matches your Ollama server address

**RuntimeError: "Model 'qwen2.5:1.5b' not found"**
- Pull the model: `docker exec -it presidio-ollama-1 ollama pull qwen2.5:1.5b`
- Or for manual setup: `ollama pull qwen2.5:1.5b`
- Verify the model name matches the `model_id` in your configuration
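
The reachability check above can be scripted with only the standard library. This is a minimal sketch; the default URL and timeout are assumptions matching the configuration defaults, and it only verifies that an HTTP server answers at the address:

```python
# Minimal reachability probe for an Ollama server. Ollama answers a plain
# HTTP GET on its root URL, so any successful response means the server is
# up; connection errors mean it is not reachable.
from urllib.request import urlopen
from urllib.error import URLError


def ollama_reachable(url: str = "http://localhost:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an HTTP server responds at `url`, else False."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        return False


if __name__ == "__main__":
    if not ollama_reachable():
        print("Ollama server not reachable; try `docker compose up -d ollama`")
```

Run this before constructing the recognizer to fail fast with a clear message instead of a mid-analysis `ConnectionError`.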

---

## Using Azure OpenAI (Cloud Models)

_Documentation coming soon_

---

## Choosing Between Ollama and Azure OpenAI

_Comparison documentation coming soon_
4 changes: 2 additions & 2 deletions e2e-tests/requirements.txt
@@ -1,4 +1,4 @@
requests>=2.32.4
pytest
file:../presidio-analyzer
file:../presidio-anonymizer
-e ../presidio-analyzer[langextract]
-e ../presidio-anonymizer
51 changes: 51 additions & 0 deletions e2e-tests/resources/ollama_test_config.yaml
@@ -0,0 +1,51 @@
# LMRecognizer base configuration
lm_recognizer:
supported_entities:
- PERSON
- LOCATION
- ORGANIZATION
- PHONE_NUMBER
- EMAIL_ADDRESS
- DATE_TIME
- US_SSN
- CREDIT_CARD
- MEDICAL_LICENSE
- IP_ADDRESS
- URL
- IBAN_CODE

labels_to_ignore:
- payment_status

enable_generic_consolidation: true
min_score: 0.5

langextract:
prompt_file: presidio-analyzer/presidio_analyzer/conf/langextract_prompts/default_pii_phi_prompt.j2
examples_file: presidio-analyzer/presidio_analyzer/conf/langextract_prompts/default_pii_phi_examples.yaml

entity_mappings:
person: PERSON
full_name: PERSON
name_first: PERSON
name_last: PERSON
name_middle: PERSON
location: LOCATION
address: LOCATION
organization: ORGANIZATION
phone: PHONE_NUMBER
phone_number: PHONE_NUMBER
email: EMAIL_ADDRESS
date: DATE_TIME
ssn: US_SSN
identification_number: US_SSN
credit_card: CREDIT_CARD
medical_record: MEDICAL_LICENSE
ip_address: IP_ADDRESS
url: URL
iban: IBAN_CODE

model:
model_id: qwen2.5:1.5b
model_url: http://localhost:11434
temperature: 0.0