Week 04 - AI-Assisted Analysis and Research Workflows
This week introduces the main 2026 update to math camp: how to use AI tools in a way that actually improves your fall-semester work. We will treat AI as a collaborator for code drafting, exploratory analysis, structured extraction, and interpretation support, while keeping reproducibility and verification at the center of the workflow.
Guiding Questions
You do not need to answer all of them. They are here to guide your reading and reflection.
- Which parts of a policy-data workflow are genuinely faster with AI?
- What kinds of mistakes do AI tools make in code, interpretation, and data extraction?
- How can you preserve an audit trail when AI materially shapes your analysis?
- When should you ask for free-form help, and when should you constrain the output into a checklist, table, or schema?
2026 Lesson Notes
Use AI for bounded tasks
The most useful pattern is to ask AI for something narrow and testable: a missingness check, a plotting function, a set of grouped summaries, a debugging explanation, or a draft Quarto section. Avoid asking for a whole project "done for you."
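A bounded request like "write me a missingness check" can be sketched in a few lines of R. The function name and example data below are hypothetical, but the shape of the task is the point: small, self-contained, and easy to test.

```r
# A bounded, testable task: summarize the share of missing values per column.
# `missingness_summary` and `dat` are illustrative names, not course code.
missingness_summary <- function(df) {
  sapply(df, function(col) mean(is.na(col)))
}

dat <- data.frame(
  income = c(50, NA, 70, NA),
  state  = c("OH", "PA", NA, "OH"),
  stringsAsFactors = FALSE
)

missingness_summary(dat)
```

Because the output is a named numeric vector, you can verify it by hand in seconds, which is exactly what makes this kind of request AI-friendly.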
Verification is now part of the workflow
In the fall, the strongest students will not be the ones who use AI the most. They will be the ones who know how to verify AI outputs quickly and systematically. That means rerunning code, checking intermediate objects, comparing claims to actual model output, and documenting what changed.
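One way to make "comparing claims to actual model output" concrete is to encode the claim as an assertion against the fitted object. This is a minimal sketch with simulated data; the claimed slope and tolerance are illustrative.

```r
# Verify an AI-drafted claim against the actual model object,
# rather than trusting a prose summary of the results.
set.seed(1)
sim <- data.frame(x = 1:20)
sim$y <- 2 * sim$x + rnorm(20)  # true slope is 2 by construction

fit <- lm(y ~ x, data = sim)

claimed_slope <- 2              # e.g., what an AI summary asserted
actual_slope  <- coef(fit)[["x"]]

# Fail loudly if the claim drifts from the fitted coefficient.
stopifnot(abs(actual_slope - claimed_slope) < 0.2)
```

Checks like this leave an audit trail in the script itself: if the data or model changes and the claim no longer holds, the document stops rendering instead of silently publishing a stale interpretation.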
AI should strengthen reproducibility, not weaken it
If AI helps you write code or interpretation, the final result should still live in a transparent artifact: an R script, a Quarto document, or a notebook with visible transformations, outputs, and reasoning.
Readings
Suggested Session Flow
- Prompting for data exploration
- Turning AI output into reproducible code
- Using structured outputs for extraction and classification
- Auditing an AI-assisted workflow
- Comparing when Codex and Claude Code are helpful versus when plain R or Quarto is enough
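For the session on structured outputs, one useful pattern is to constrain AI extraction to a fixed schema and then validate it in R. The field names and categories below are hypothetical examples, not a required format.

```r
# Sketch: validate AI-extracted classifications against a fixed schema.
# `allowed_topics` and the columns of `extracted` are illustrative.
allowed_topics <- c("housing", "transit", "education")

# Imagine this data frame was parsed from an AI tool's structured
# (e.g., JSON) output rather than from free-form prose.
extracted <- data.frame(
  doc_id = c("d1", "d2"),
  topic  = c("housing", "transit"),
  stringsAsFactors = FALSE
)

# Validation gate: fail if the output drifts from the expected schema
# or invents a category outside the allowed set.
stopifnot(
  identical(names(extracted), c("doc_id", "topic")),
  all(extracted$topic %in% allowed_topics)
)
```

Forcing the output into a checklist-style schema like this turns a fuzzy extraction task into something auditable: every row either passes the gate or the workflow stops.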