
AI & UXR
Prompt, Project, or Skill? Which AI Tool Truly Accelerates Your UX Research?
6 MIN · May 14, 2026
Third recruitment email this month. Different client, different target audience, different country. But the structure? Identical. Study type, timeframe, screening criteria, incentive note, NDA clause. Anyone working in UX research knows this déjà vu: digging out the old email, swapping out variables, and wondering why this is still manual work.
The honest answer: Because most researchers don’t know that there’s a third option beyond copy-paste and long prompts. Claude now offers three levels of tools: the individual prompt, the project with a meta-prompt, and the skill. Each solves a different problem. None replaces the other.
The problem isn’t that there aren’t enough UX AI tools. The problem is a lack of clarity. This article sorts that out for you. You’ll learn what distinguishes prompts, projects, and skills; where each tool fits in the UX research process; and how to decide where to start.
📌 Key takeaways
Claude offers three tool levels: single prompt, project with meta-prompt, and skill. Each has its place.
Skills are suitable for recurring, structured tasks with clear input and output.
Projects are suitable for iterative workflows that require dialogue and follow-up questions.
In everyday UX research, there are dozens of tasks that run faster as a skill than in a chat.
Skills have been available since October 2025 and have been an open standard since December 2025.
No coding knowledge required. A skill is a Markdown file containing instructions.
The combination of skills and MCP connectors will continue to transform UX workflows.
Where do Claude Skills come from, and what do they do?
Short answer: Skills have been part of Claude for longer than most people realize. The document creation features for Word, Excel, PowerPoint, and PDF, which arrived in September 2025, were already running entirely via Skills in the background. Hardly anyone noticed at the time.
On October 16, 2025, Anthropic officially launched the feature. Two months later, on December 18, 2025, the decisive step followed: Anthropic published the Agent Skills specification as an open standard on agentskills.io. Microsoft adopted the standard for VS Code and GitHub, as did Cursor, Goose, Amp, and OpenCode.
What does this mean technically? A skill is a folder containing a SKILL.md file. This file includes a name, a description, and instructions in Markdown. Upon startup, Claude reads only the name and description of each installed skill. This consumes about 100 tokens per skill. Only when a task matches a skill does Claude load the full instructions. This principle is called Progressive Disclosure and explains why dozens of skills can be installed in parallel without the context window becoming overloaded.
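To make this concrete, here is a minimal sketch of such a file. The two header fields mirror what the article describes (a name and a description in a short YAML header); everything below is plain-language instruction, and the skill name is a placeholder:

```markdown
---
name: example-skill
description: One sentence that tells Claude when this skill applies.
---

# Instructions

Plain-language steps, rules, and the expected output format go here.
Additional resources (templates, reference files, optional scripts)
can sit next to SKILL.md in the same folder.
```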
Skills are available on the Free, Pro, Max, Team, and Enterprise plans. Prerequisite: Code Execution must be enabled.
Prompt, Project, or Skill: When Do I Need Which One?
A question surprisingly few researchers ask themselves. Most handle everything in one-off chats until they realize they’re constantly repeating context, retyping instructions, and getting results that are slightly different each time. Three tools, three use cases.
A single prompt means: one question, one answer, no memory of it afterward. You open a chat, ask your question, get your result. No saved context, no reuse. This is perfect for one-off tasks. “Rewrite these three usability findings.” Or: “What are the differences between card sorting and tree testing?” No setup required, ready to use immediately.
A project with a meta-prompt is a permanent workspace. You store system instructions, upload reference files, and define rules for tone, structure, and quality. Every new chat within this project knows the full context. The workflow thrives on back-and-forth: you provide input, Claude delivers a draft, you correct it, Claude revises it. A typical example is blog production: the entire content process, from topic exploration to the LinkedIn post, runs within a project whose meta-prompt defines the voice, formatting rules, and quality standards. The same goes for proposals that require extensive coordination, or for research concepts that need multiple iterations.
A skill is the opposite of iteration. Input in, output out. You type your request into any chat, Claude automatically recognizes that a suitable skill exists, loads it, and delivers the result. Or you can call it up directly via a slash command. No follow-up questions, no dialogue, no project context required. Instead: consistent results, every time.
|  | Prompt | Project (meta-prompt) | Skill |
| --- | --- | --- | --- |
| Reusable | No | Yes, within the project | Yes, globally in every chat |
| Interaction | One-time | Dialogue and iteration | Input → Output |
| Context | Only in the current chat | Project files + instructions | SKILL.md + resources |
| Activation | Manual | Open project | Automatic or /slash command |
| UX example | “Rephrase this finding” | Blog production, proposal coordination | Recruitment email, screener |
The rule of thumb: If you perform a task more than once a month and it has the same basic structure every time, it’s a candidate for a skill. If you need to ask follow-up questions and iterate during the process, it belongs in a project. If it’s a one-time thing, a prompt is sufficient.
Where in the UX research process are skills worthwhile?
More places than most people realize. UX AI doesn’t have its greatest impact on that one big analysis, but on the many small tasks that run through every project. Here are the typical phases with specific tasks that work as skills.
Before the study: Everything that a briefing provides as input
Preparing a study consists of tasks that hardly change from project to project. The content changes, but the structure remains the same. This is exactly the sweet spot for skills.
Recruitment emails, for example. A skill that distills a briefing into a finished email to the recruitment agency: study type, target group, screening criteria, timeframe, NDA notice. Important for such skills: consciously decide what the skill should not do. Suggesting an incentive amount, for example, does not belong in the skill because that is a budget decision that remains with the human.
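To illustrate, here is how such a recruitment email skill might be written. The skill name, the structure of the instructions, and the exact rules are a hypothetical sketch, not a prescribed format:

```markdown
---
name: recruitment-email
description: Drafts a recruitment email to an external agency from a study briefing.
---

# Instructions

From the briefing the user provides, draft an email that covers:

1. Study type and method
2. Target group and screening criteria
3. Timeframe and session length
4. A note that an incentive will be paid
5. A reminder that participants sign an NDA

Rules:
- Never suggest an incentive amount; that budget decision stays with the human.
- If a briefing detail is missing, flag it as an open point instead of inventing it.
```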
The same logic applies to screener questionnaires. Generate a screener with filter logic, knockout questions, and quotas from the target group description in the briefing. Or for discussion guides: warm-up, topic blocks based on research questions, tasks, cool-down, with time estimates per block. Or for consent forms and privacy policy texts, where only the variables change (study type, target group, setting, recording), but the GDPR requirements remain consistent.
After the study: From raw data to report
The analysis phase is full of tasks that must be executed with technical precision but do not require a flash of inspiration. Transforming raw observations into clear findings according to a fixed framework: What was observed, among how many participants, why it likely happened, and what impact it has. This is precisely where a skill prevents the notorious “That’s an observation, not a finding” problem.
Severity ratings based on the Nielsen scale for a list of issues. Extracting quotes from transcripts, clustering them by theme, and entering them into a table with participant IDs. Condensing executive summaries from sprawling reports. Drafting stakeholder emails after the study concludes: What was the goal, what did we learn, what do we recommend, what are the next steps.
All of this always follows the same choreography. And most researchers today write all of this from scratch every single time.
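Sketched as a skill, the finding framework from above could look like this. The name matches the “finding formulator” mentioned later in this article; the field labels and wording are illustrative, and the severity anchors follow Nielsen’s 0–4 scale:

```markdown
---
name: finding-formulator
description: Turns raw usability observations into structured findings with a severity rating.
---

# Instructions

For each raw observation, produce a finding with exactly these fields:

- **Observation:** What happened, in neutral language
- **Frequency:** Among how many participants it occurred (e.g., 4 of 8)
- **Likely cause:** A plausible explanation, clearly marked as interpretation
- **Impact:** The consequence for users or the business
- **Severity:** 0–4 on the Nielsen scale (0 = not a problem, 4 = usability catastrophe)

If the input is an opinion rather than an observation, ask for the
underlying observation instead of rating it.
```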
Project overview: Briefing check and method selection
A briefing check skill takes a client briefing and runs it through a checklist: Are the research questions formulated with sufficient precision? Is the target audience defined in measurable terms? Does the proposed method fit the research question? The result is a list of open points for the kickoff call. It’s no substitute for the conversation, but it’s preparation that takes five minutes instead of thirty.
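The briefing check, sketched the same way; the checklist items mirror the questions above, everything else is illustrative:

```markdown
---
name: briefing-check
description: Runs a client briefing through a completeness checklist and lists open points for the kickoff call.
---

# Instructions

Check the briefing against this checklist:

- Are the research questions formulated with sufficient precision?
- Is the target audience defined in measurable terms?
- Does the proposed method fit the research question?

Output a short list of open points, phrased as questions for the
kickoff call. Do not answer the open points yourself.
```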
Method recommendations for early consulting meetings work similarly. The client describes their problem, and the skill suggests suitable methods with justification, typical setup, and effort estimates.
What makes a task a good candidate for a skill?
Not everything that regularly lands on your desk is suitable for a skill. The task needs four characteristics.
It is recurring. At least once a month, preferably more often. One-time special cases are prompts, not skills.
It is structured. There is a recognizable pattern: same sections, same logic, same quality criteria. The variables change, but the framework does not.
It requires few follow-up questions. If you have to make three context-dependent decisions every time you run it, a project is a better fit.
And the output is clearly defined. An email, a questionnaire, a table, a rating. No open-ended result that only takes shape through discussion.
Where skills don’t fit: interpretation of research results. Choosing methods for nebulous questions. Empathetic facilitation. Strategic consulting.
Anything where judgment makes the difference cannot be delegated. And that’s a good thing. Skills clear away the routine so that more capacity remains for precisely this work.
Where is the journey headed?
Three developments that should be on the radar for UX research.
Platform independence. Skills have been an open standard since December 2025. A skill built for Claude will, in principle, also work in Cursor, Codex, or Gemini CLI. Invest once, be locked in nowhere.
Skills that train themselves. Anthropic aims to eventually enable agents to independently create, edit, and evaluate skills. Specifically, this could mean: A finding formulator becomes more precise after many iterations because it learns which phrasing patterns work in the given context. As of May 2026, this is still a long way off, but the direction is unmistakable.
Skills plus MCP connectors. Skills provide the knowledge (how to complete a task), while MCP connectors provide the link to external tools such as Jira, Confluence, or Dovetail. Anthropic aims to integrate the two systems more closely. For UX research, this would mean: a finding skill that knows what a clean finding should look like, paired with an MCP connector that pushes the result directly into the research repository. No more copy-pasting between tools.
Conclusion: Start with the task that annoys you the most
The tools are here. Prompts for one-off questions, projects for iterative work, skills for recurring routines. The question isn’t whether UX AI is worth it for your daily research. The question is where you’ll start.
The pragmatic recommendation: Pick the one task that comes up at least once a month and where you catch yourself thinking every time, “I’ve written this a hundred times already.” Build a skill for it. It takes half an hour. After that, you’ll know if the approach works for you.
Whether it’s the recruitment email, the screener, the briefing check, or the stakeholder email after the study: it doesn’t matter. The effect is the same. Less routine, more capacity for the work that really counts.
FAQ:
Do I need coding knowledge to build a skill?
No. A SKILL.md file is Markdown with a short YAML header for the name and description. You write the rest in natural language. For more complex skills, you can add scripts, but you don’t have to.
How much does it cost?
Skills are part of your existing Claude plan. No additional costs. The prerequisite is that Code Execution is enabled in the settings. Skills are available on all plans, from Free to Enterprise.
How do Skills differ from a good prompt?
A prompt is one-time. You type it, get your result, and the next time you type it again. A skill is reusable, triggers automatically when Claude recognizes the appropriate task, and delivers consistent results because the instructions are hard-coded.
Can I share skills within a team?
On Team and Enterprise plans, yes. You can share skills with colleagues or the entire organization. On other plans, custom skills are tied to your account.
Do skills only work in Claude?
No. Since December 2025, the Agent Skills specification has been an open standard. Tools like Cursor, VS Code, Codex, and Gemini CLI support the format.
💌 Not enough yet? Read on in our newsletter.
Comes out four times a year. Sticks with you longer. https://www.uintent.com/de/newsletter
AUTHOR
Tara Bosenick
Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.
At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.
She is one of the leading voices in the UX, CX and Employee Experience industry.