
Fernando Conde

AI & Automation Strategist researching what AI platforms actually do during your sessions — and building tools to measure it.

Current Work

HAR-Based AI Platform Forensics — Tools and documentation for examining AI platform behavior using standard browser developer tools (F12/DevTools). Covers Claude, ChatGPT, Gemini, Grok, DeepSeek, and Perplexity. Includes per-platform field path references, Python scripts for parsing HAR files and SSE streams, a field classification registry covering 3,919 unique fields, and a redaction toolkit for safe public sharing.
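HAR captures exported from DevTools are plain JSON, so the parsing side of this can be sketched in a few lines. The snippet below is a minimal illustration, not the repository's actual script: it filters a HAR's `log.entries` (per the HAR 1.2 layout) down to the requests hitting a given platform host, returning each URL with its response status.

```python
import json

def extract_platform_requests(har_path: str, host_fragment: str) -> list[tuple[str, int]]:
    """Return (url, status) pairs from a HAR file whose request URL
    contains host_fragment (e.g. "claude.ai" or "chatgpt.com").

    A HAR file is JSON with entries under log.entries; each entry
    carries a request.url and a response.status.
    """
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    return [
        (entry["request"]["url"], entry["response"]["status"])
        for entry in har["log"]["entries"]
        if host_fragment in entry["request"]["url"]
    ]
```

From there, per-platform field path references map which keys inside each request and response body are worth classifying.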

Multi-Turn AI Interaction Research Framework — Measurement infrastructure for evaluating AI assistant behaviors across extended conversations. Built from analysis of 900+ sessions and 21M+ words of longitudinal interaction data across multiple platforms. Defines 95+ validated behavioral mechanisms across 6 pattern categories, 15 clinical distress pathways mapped to established psychological constructs, and a 10-dimension evaluation rubric scoring agency support versus agency drift. Licensed under CC BY-NC 4.0.
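To make the rubric idea concrete, here is a hypothetical sketch of how ten per-dimension ratings might be aggregated into one session-level score. The dimension count, the -2..+2 scale, and the plain average are illustrative assumptions, not the framework's published scoring method.

```python
from statistics import mean

def agency_score(ratings: dict[str, float]) -> float:
    """Aggregate per-dimension ratings into a session score.

    ratings maps dimension name -> score in [-2.0, +2.0], where
    negative values indicate agency drift and positive values
    agency support. The 10-dimension shape and scale here are
    illustrative, not the framework's actual rubric.
    """
    if len(ratings) != 10:
        raise ValueError("rubric expects exactly 10 dimensions")
    for name, score in ratings.items():
        if not -2.0 <= score <= 2.0:
            raise ValueError(f"{name} out of range: {score}")
    return mean(ratings.values())
```

An unweighted mean is the simplest aggregate; a real rubric would likely weight dimensions or report them separately rather than collapsing to one number.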

What Connects Them

The forensics toolkit captures what platforms are doing at the infrastructure level — system prompts, experiments, model switches, capacity changes. The interaction research framework measures what those dynamics produce at the behavioral level — how multi-turn patterns accumulate to affect user cognition and agency over time. One reveals the architecture. The other measures its effects.

Coming Later This Year

Evaluation frameworks and measurement tools for quantifying AI behavioral patterns at scale — including detection methods for specification failures, silent regressions, and undisclosed limitations across platforms.

Background

I build production automation systems and AI governance tooling. My work spans regulatory analysis, document processing pipelines, and enterprise workflow automation, with a consistent focus on data sovereignty, local processing, and auditable systems.

Connect

LinkedIn

Repositories

  1. ai-interaction-safety-specs: Measurement infrastructure for multi-turn AI interaction safety evaluation

  2. ai-transparency-suite (Python): Open-source toolkit for capturing and analyzing undisclosed data collection by AI chat platforms — extracts telemetry ratios, hidden integrations, experiment configs, and PII transmission patterns

  3. har-forensics (Python): Tools and documentation for examining what AI platforms do during your sessions using standard browser developer tools (F12/DevTools). Covers Claude, ChatGPT, Gemini, Grok, DeepSeek, and Perplexity

  4. conde-fc: Profile README. AI accountability researcher building measurement infrastructure for post-deployment agentic AI safety — HAR forensics, behavioral pattern detection, and governance-ready evidence systems

  5. ai-copyright-authorship-framework (Python): Evidence-based methodology for determining human vs AI authorship in collaboratively produced code — maps legal requirements to measurable interaction patterns across AI platforms

  6. agentic-ai-accountability (Python): Post-deployment behavioral measurement framework for AI agents — traces failures, quantifies preventable waste, maps correction persistence, and produces governance-ready evidence from real product…