This dashboard tracks aggregate community compliance with core/AGENTS.md across Drupal core issue queue artifacts. It exists to evaluate a 3-month harm-reduction experiment: does publishing operator rules change observable LLM-prose patterns?
Two tracks, different denominators by design:
Track 1, phrase tells: tells such as ##/### headers in <180-word comments or formula phrasing. Coverage: every artifact (comment, MR note, issue summary, MR description), regardless of author. Matching is stem-prefix against this list, after fenced and inline code are stripped; a trailing silent e is dropped so conjugations match (leverage also matches leveraged and leveraging). No carve-outs.
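A minimal sketch of the Track 1 matcher, assuming a Python pipeline; the stems and helper names here are illustrative stand-ins, not the project's actual code or tell list:

```python
import re

# Hypothetical excerpt; the real tell list is the one referenced above.
TELL_PHRASES = ["leverage", "delve", "utilize"]

FENCED_CODE = re.compile(r"```.*?```", re.DOTALL)
INLINE_CODE = re.compile(r"`[^`]+`")

def strip_code(text: str) -> str:
    """Drop fenced and inline code before matching, per the rule above."""
    return INLINE_CODE.sub(" ", FENCED_CODE.sub(" ", text))

def stem(phrase: str) -> str:
    """Drop a trailing silent e: 'leverage' -> 'leverag', so the prefix
    also covers 'leveraged' and 'leveraging'."""
    return phrase[:-1] if phrase.endswith("e") else phrase

def tell_hits(text: str) -> list[str]:
    """Stem-prefix match every word of the code-stripped text. No carve-outs."""
    stems = [stem(p) for p in TELL_PHRASES]
    words = re.findall(r"[a-z]+", strip_code(text).lower())
    return [w for w in words if any(w.startswith(s) for s in stems)]
```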
Track 2, constructions (regex sketch after this list):

- whether X or Y: span ≤100 chars between keywords, excluding the whether or not idiom (empirically 19% of literal pre-period hits, structurally not the X/Y construction §2 targets)
- X rather than Y
- I believe …
- not only X but also Y: span ≤100 chars between keywords

Small-cell suppression: K=10 for per-component, per-topic-tag, and MR-burst slices (component maintainership is publicly identifiable on drupal.org); K=5 elsewhere. Suppressed cells display in the JSON as "value": null, "suppressed": true, with the underlying numerator and denominator preserved in the database for operator audit but not exposed in the public artifact.
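A regex rendering of the span rule, again a sketch: two of the four constructions shown, and the patterns are assumptions rather than the deployed ones.

```python
import re

# <=100 characters allowed between keywords; non-greedy so the nearest
# closing keyword ends the span.
WHETHER_X_OR_Y = re.compile(
    r"\bwhether\b(?!\s+or\s+not\b).{0,100}?\bor\b",  # lookahead drops the idiom
    re.IGNORECASE | re.DOTALL,
)
NOT_ONLY_BUT_ALSO = re.compile(
    r"\bnot only\b.{0,100}?\bbut also\b",
    re.IGNORECASE | re.DOTALL,
)

def construction_hits(text: str) -> dict[str, int]:
    """Count span-limited keyword-pair matches in one artifact's text."""
    return {
        "whether_x_or_y": len(WHETHER_X_OR_Y.findall(text)),
        "not_only_but_also": len(NOT_ONLY_BUT_ALSO.findall(text)),
    }
```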
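And a sketch of the suppression logic; the slice names and the choice of the denominator as the K test are assumptions about the schema, not documented behavior:

```python
import json

K_10_SLICES = {"component", "topic_tag", "mr_burst"}  # hypothetical slice keys

def render_cell(slice_kind: str, numerator: int, denominator: int) -> dict:
    """Public view of one dashboard cell. Raw counts stay in the database
    for operator audit; only this dict reaches the published JSON."""
    k = 10 if slice_kind in K_10_SLICES else 5
    if denominator < k:  # assumption: K applies to the cell's denominator
        return {"value": None, "suppressed": True}
    return {"value": numerator / denominator, "suppressed": False}

print(json.dumps(render_cell("component", 3, 7)))
# {"value": null, "suppressed": true}
```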
The dashboard pins the AGENTS.md ruleset to the version in effect at experiment T0. Mid-experiment amendments are visible to contributors but do not change what the dashboard measures — pre/post comparability requires a stable treatment definition.
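One way to hold the treatment definition fixed, sketched under the assumption that the ruleset lives in git; the ref is a placeholder, not the actual pinned revision:

```python
import subprocess

PINNED_REF = "<commit-at-T0>"  # placeholder, not the real pinned revision

def load_ruleset() -> str:
    """Read core/AGENTS.md at the pinned ref, never at HEAD, so mid-experiment
    amendments cannot shift what the dashboard measures."""
    return subprocess.check_output(
        ["git", "show", f"{PINNED_REF}:core/AGENTS.md"], text=True
    )
```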
The phrase-tell list contains source-aware rules: markdown-format tells (**bold**, ## heading, [link](…)) fire only on source='comment', because drupal.org does not render Markdown. GitLab notes (source='note') render Markdown natively, so MD usage is not a tell there. Comparing tell density across artifact types in the same chart therefore reflects both LLM behavior and phrase-list scope; readers should compare each artifact type to its own pre-period baseline rather than comparing across types.
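A sketch of that gating, assuming each artifact carries the source field described above; the rule records are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TellRule:
    name: str
    markdown_format: bool  # True for **bold** / ## heading / [link](...) tells

# Illustrative entries; the real list is the phrase-tell list above.
RULES = [TellRule("md_heading", True), TellRule("phrase_leverage", False)]

def rules_for(source: str) -> list[TellRule]:
    """Markdown-format tells fire only on source='comment' (drupal.org does
    not render Markdown); GitLab notes render it natively, so they are
    exempt from format tells but still get phrase tells."""
    return [r for r in RULES if not r.markdown_format or source == "comment"]
```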