Interpretable language models

Graph-based neural architectures with sparse activation patterns, extracting semantic triples and tracking token distributions to make model reasoning transparent and auditable.

Latest

All reports