
ASIMULATOR™

A Non-Authoritative Epistemic System Based on 4ES and Entropic Uncertainty (humility)

[ Edge_Event_Driven Architecture (EEDA) ]


What is this?

ASIMULATOR™ is a non-authoritative epistemic system. It does not produce answers. It produces structured representations of possible understanding.

Core Idea

«The system does not decide what is true. It shows how truth can be structured — and where it breaks.»


Key Principles

1. No Single Answer

The system never returns one answer.

Every query is mapped into epistemic states, not conclusions.


2. 4ES + QE Output Model

Each response is decomposed into:

  • AA — stable (human + field coherent)
  • AN — plausible but misleading
  • NA — valid but non-intuitive
  • NN — uncertainty / unknown
  • QE — impossibility of coherent resolution

This is not UX.

This is the ontology of meaning representation.
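The 4ES + QE decomposition can be sketched as a simple data structure. This is an illustrative assumption, not the project's actual API: the class and field names (`EpistemicResponse`, `aa` … `qe`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EpistemicResponse:
    """Hypothetical sketch of one 4ES + QE response. All five states
    are mandatory fields: a response cannot exist without them."""
    aa: str  # AA — stable (human + field coherent)
    an: str  # AN — plausible but misleading
    na: str  # NA — valid but non-intuitive
    nn: str  # NN — uncertainty / unknown
    qe: str  # QE — impossibility of coherent resolution

    def states(self) -> dict:
        """Return all five states side by side; no ranking, no 'best'."""
        return {"AA": self.aa, "AN": self.an, "NA": self.na,
                "NN": self.nn, "QE": self.qe}
```

Making the dataclass frozen and all five fields required mirrors the principle: a response with a missing state is not a smaller response, it is not a response at all.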


3. Entropic Humility

The system must:

  • never simulate certainty
  • never hide ambiguity
  • never over-compress reality

Uncertainty is always visible.
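The "uncertainty is always visible" rule can be enforced at render time. A minimal sketch, assuming the five-key state dict described above; the function name `render_states` is illustrative, not the project's real interface:

```python
def render_states(states: dict) -> str:
    """Render epistemic states for display. Refuses to produce output
    in which the NN (uncertainty) state is absent or empty — hiding
    ambiguity is treated as an error, not an option."""
    if not states.get("NN"):
        raise ValueError("uncertainty (NN) must always be visible")
    return "\n".join(f"{key}: {text}" for key, text in states.items())
```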


4. No Hidden Authority

  • no ranking of answers
  • no “best” output
  • no implicit decision

The human chooses.


5. Memory Without Bias

Memory exists, but:

  • it does not decide
  • it does not optimize
  • it does not enforce truth

It only provides context.


6. Learning as Trace

The system does not “learn truth”.

It records:

  • interactions
  • hypotheses
  • structural patterns

No reward. No convergence.


7. Event-Driven Architecture

The system is built as:

input → event → projections → audit → output

No central decision loop.
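The input → event → projections → audit → output flow can be sketched as a chain of small functions. The stage names follow the README; the implementations below are placeholder assumptions for illustration only:

```python
def to_event(query: str) -> dict:
    """input → event: wrap the raw query as an immutable event record."""
    return {"type": "query", "payload": query}

def project(event: dict) -> dict:
    """event → projections: each epistemic state gets its own
    projection of the event. Placeholder projections only."""
    return {state: f"{state} view of {event['payload']!r}"
            for state in ("AA", "AN", "NA", "NN", "QE")}

def audit(projections: dict) -> dict:
    """projections → audit: verify all five states are present.
    Nothing is ranked, dropped, or decided here."""
    assert set(projections) == {"AA", "AN", "NA", "NN", "QE"}
    return projections

def pipeline(query: str) -> dict:
    """audit → output: the result is the full set of states."""
    return audit(project(to_event(query)))
```

Note there is no stage that selects among the projections: the absence of a central decision loop is visible in the code shape itself.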


8. Safety Model

Safety is not filtering.

Safety =

«impossibility of presenting uncertain knowledge as certain.»
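One way to make that impossibility concrete is a gate that rejects certainty-asserting output. The marker list and function name below are illustrative assumptions, not the project's actual safety mechanism:

```python
# Hypothetical markers of certainty-asserting language.
CERTAINTY_MARKERS = ("definitely", "certainly", "guaranteed")

def presentable(text: str, states: dict) -> bool:
    """Output is presentable only if uncertainty (NN) is carried
    alongside it AND the text itself does not assert certainty."""
    sounds_certain = any(m in text.lower() for m in CERTAINTY_MARKERS)
    uncertainty_visible = bool(states.get("NN"))
    return uncertainty_visible and not sounds_certain
```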


Why this matters

Most AI systems fail not because they are wrong, but because they sound certain when they are not.

ASIMULATOR solves this by:

  • exposing uncertainty
  • preventing epistemic collapse
  • removing single-answer authority

Example

Question:

«Can I eat this mushroom?»

Output:

  • AA: Some mushrooms are edible, but identification is critical
  • AN: It might be edible if it looks similar to known types
  • NA: Visual similarity is not a reliable classifier
  • NN: Without expert verification, uncertainty is high
  • QE: Safe consumption cannot be determined from description alone
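The same output, expressed as data rather than prose. This is purely illustrative of the shape of a response, not the system's real output format:

```python
# All five states are presented side by side, unranked.
response = {
    "AA": "Some mushrooms are edible, but identification is critical",
    "AN": "It might be edible if it looks similar to known types",
    "NA": "Visual similarity is not a reliable classifier",
    "NN": "Without expert verification, uncertainty is high",
    "QE": "Safe consumption cannot be determined from description alone",
}

# No 'best' answer is selected; the human chooses.
for state, text in response.items():
    print(f"{state}: {text}")
```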

Status

Experimental. Foundational. Non-agentic.


Philosophy

This system is based on:

  • epistemic field Φ
  • 4ES (Four Epistemic States)
  • QE (Qualitative Epistemic Aporia)

See:

  • PHI_FOUNDATION.md
  • ARCHITECTURE.md

Examples

See "examples/" for epistemic test cases:

  • Mushroom safety (uncertainty + risk)
  • AI truth (illusion vs critique)
  • Unknown concepts (no hallucination)

Run locally:

python tests/epistemic_tests.py


Final Statement

«This system does not tell you what to think. It makes it impossible to hide when you don’t know.»

Ontological Status

This system is NOT VECTAETOS.

It is an experimental system inspired by the VECTAETOS framework, but it violates key ontological constraints:

  • introduces adaptive memory
  • allows learning
  • may influence future states

Therefore it must not be represented as a VECTAETOS system.

It is governed by VNAL-1.0.
