Author Attribution (AA) Explainability Tool
This demo helps you see inside a deep AA model’s latent style space.
Currently, you are inspecting LUAR with pre-defined AA tasks from the HRS dataset.
Visualize
Place your AA task in context relative to a set of background authors.
Generate
Describe the investigated authors' writing styles via human-readable, LLM-generated style features.
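A minimal sketch of how such LLM-generated style features could be produced, assuming an OpenAI-style chat API; the prompt wording, model name, and parsing below are illustrative assumptions, not the demo's actual implementation.

```python
# Illustrative only: the demo's real prompt, model, and parsing may differ.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def describe_style(documents: list[str], n_features: int = 5) -> str:
    """Ask an LLM for short, human-readable style features of an author's texts."""
    joined = "\n---\n".join(documents)
    prompt = (
        f"List {n_features} concise writing-style features (e.g. 'frequent "
        f"rhetorical questions') that characterize the author of these texts:\n{joined}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```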
- Select a model and a task source (pre-defined or custom)
- Click Load Task & Generate Embeddings to load the task texts and embed them with the selected model
- Run Visualization to see the mystery author and candidates in the AA model's latent space (a pipeline sketch follows this list)
- Zoom into the visualization to select a cluster of background authors
- Pick an LLM feature to highlight in yellow
- Pick a Gram2Vec feature to highlight in blue
- Click Show Combined Spans to compare the highlighted LLM and Gram2Vec spans side by side
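Roughly what the embedding and visualization steps do, sketched below: embed each text with the selected AA model and project all embeddings into a shared 2-D space. The `embed_texts` stand-in and the choice of UMAP are assumptions for illustration; the demo's actual model call and projection method may differ.

```python
# Sketch of the embed-then-project pipeline; embed_texts() is a stand-in for
# the selected AA model (e.g. LUAR), and UMAP is an assumed projection method.
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def embed_texts(texts: list[str]) -> np.ndarray:
    """Placeholder: return one latent style vector per text.
    Random vectors here just let the sketch run end to end."""
    return rng.normal(size=(len(texts), 512))

mystery_vec = embed_texts(["<mystery document>"])                        # shape (1, d)
candidate_vecs = embed_texts([f"<candidate {i}>" for i in range(3)])     # shape (3, d)
background_vecs = embed_texts([f"<background {i}>" for i in range(50)])  # shape (50, d)

# Project every author into the same 2-D space.
all_vecs = np.vstack([background_vecs, candidate_vecs, mystery_vec])
coords = umap.UMAP(n_components=2, random_state=42).fit_transform(all_vecs)
bg, cand, myst = coords[:-4], coords[-4:-1], coords[-1:]

plt.scatter(bg[:, 0], bg[:, 1], c="grey", marker="o", s=10, label="background")
plt.scatter(cand[:, 0], cand[:, 1], c="tab:blue", marker="D", label="candidates")
plt.scatter(myst[:, 0], myst[:, 1], c="tab:red", marker="*", s=200, label="mystery")
plt.legend()
plt.show()
```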
What am I looking at?
This plot shows the mystery author (★) and three candidate authors (◆)
in the AA model’s latent space.
The grey ● symbols represent the background corpus—real authors with diverse writing styles.
You can zoom in on any region of the plot. The system analyzes the authors visible in that region
and lists their most representative writing-style features.
Use this to compare your mystery text’s position against nearby writing styles and
investigate which features distinguish it from others.
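A hedged sketch of what this zoom-driven analysis could look like: keep only the points whose 2-D coordinates fall inside the current axis limits, then rank the writing-style features of those authors. The mean-value ranking below is an illustrative heuristic, not necessarily how the demo scores features.

```python
import numpy as np

def authors_in_view(coords: np.ndarray, xlim: tuple[float, float],
                    ylim: tuple[float, float]) -> np.ndarray:
    """Indices of the authors whose points lie inside the zoomed-in viewport."""
    x, y = coords[:, 0], coords[:, 1]
    inside = (x >= xlim[0]) & (x <= xlim[1]) & (y >= ylim[0]) & (y <= ylim[1])
    return np.flatnonzero(inside)

def top_features(feature_matrix: np.ndarray, feature_names: list[str],
                 visible: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
    """Rank features by their mean value over the visible authors (illustrative heuristic)."""
    means = feature_matrix[visible].mean(axis=0)
    best = np.argsort(means)[::-1][:k]
    return [(feature_names[i], float(means[i])) for i in best]
```

In the interactive plot, `xlim` and `ylim` would come from the current viewport after you zoom.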