
AI & UXR

Prompt, Project, or Skill? Which AI Tool Truly Accelerates Your UX Research

6 MIN

May 14, 2026

Third recruitment email this month. Different client, different target audience, different country. But the structure? Identical. Study type, timeframe, screening criteria, incentive note, NDA clause. Anyone working in UX research knows this déjà vu: digging out the old email, swapping out variables, and wondering why this is still manual work.


The honest answer: Because most researchers don’t know that there’s a third option beyond copy-paste and long prompts. Claude now offers three levels of tools: the individual prompt, the project with a meta-prompt, and the skill. Each solves a different problem. None replaces the other.


The problem isn’t that there aren’t enough UX AI tools. The problem is a lack of clarity. This article sorts that out for you. You’ll learn what distinguishes prompts, projects, and skills; where each tool fits in the UX research process; and how to decide where to start.


📌 Key takeaways

  • Claude offers three tool levels: single prompt, project with meta-prompt, and skill. Each has its place.

  • Skills are suitable for recurring, structured tasks with clear input and output.

  • Projects are suitable for iterative workflows that require dialogue and follow-up questions.

  • In everyday UX research, there are dozens of tasks that run faster as a skill than in a chat.

  • Skills have been available since October 2025 and have been an open standard since December 2025.

  • No coding knowledge required. A skill is a Markdown file containing instructions.

  • The combination of skills and MCP connectors will continue to transform UX workflows.


Where do Claude Skills come from, and what do they do?

Short answer: Skills have been part of Claude for longer than most people realize. The document creation features for Word, Excel, PowerPoint, and PDF, which arrived in September 2025, were already running entirely via Skills in the background. Hardly anyone knew it.


On October 16, 2025, Anthropic officially launched the feature. Two months later, on December 18, 2025, the decisive step followed: Anthropic published the Agent Skills specification as an open standard on agentskills.io. Microsoft adopted the standard for VS Code and GitHub, as did Cursor, Goose, Amp, and OpenCode.


What does this mean technically? A skill is a folder containing a SKILL.md file. This file includes a name, a description, and instructions in Markdown. Upon startup, Claude reads only the name and description of each installed skill. This consumes about 100 tokens per skill. Only when a task matches a skill does Claude load the full instructions. This principle is called Progressive Disclosure and explains why dozens of skills can be installed in parallel without the context window becoming overloaded.
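To make this concrete, here is a minimal sketch of what such a file could look like. The skill name, description, and instructions are illustrative assumptions, not an official Anthropic example:

```markdown
---
name: severity-rating
description: Rates a list of usability issues on the Nielsen 0–4 severity scale.
---

# Severity Rating

When the user provides a list of usability issues, rate each one on the
Nielsen scale (0 = not a problem, 4 = usability catastrophe).

For every issue, state the rating, then justify it briefly in terms of
frequency, impact, and persistence.

Return the result as a table with the columns: Issue, Rating, Justification.
```

Only the `name` and `description` lines in the header sit in memory at startup; the body below is loaded when a matching task appears. That is Progressive Disclosure in practice.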


Skills are available on the Free, Pro, Max, Team, and Enterprise plans. Prerequisite: Code Execution must be enabled.


Prompt, Project, or Skill: When Do I Need Which One?

A question surprisingly few researchers ask themselves. Most handle everything in single, throwaway chats until they realize they’re constantly repeating context, retyping instructions, and getting results that are slightly different each time. Three tools, three use cases.


A single prompt means: one question, one answer, no memory of it afterward. You open a chat, ask your question, get your result. No saved context, no reuse. This is perfect for one-off tasks. “Rewrite these three usability findings.” Or: “What are the differences between card sorting and tree testing?” No setup required, ready to use immediately.


A project with a meta-prompt is a permanent workspace. You store system instructions, upload reference files, and define rules for tone, structure, and quality. Every new chat within this project knows the full context. The workflow thrives on back-and-forth: You provide input, Claude delivers a draft, you correct it, Claude revises it. A typical example: blog production. The entire content process, from topic exploration to the LinkedIn post, runs within a project whose meta-prompt defines the voice, formatting rules, and quality standards. This also applies to proposal creation involving extensive coordination or research concepts that require multiple iterations.


A skill is the opposite of iteration. Input in, output out. You type your request into any chat, Claude automatically recognizes that a suitable skill exists, loads it, and delivers the result. Or you can call it up directly via a slash command. No follow-up questions, no dialogue, no project context required. Instead: consistent results, every time.



|             | Prompt                   | Project (Meta-Prompt)                  | Skill                       |
|-------------|--------------------------|----------------------------------------|-----------------------------|
| Reusable    | No                       | Yes, within the project                | Yes, globally in every chat |
| Interaction | One-time                 | Dialogue and iteration                 | Input → Output              |
| Context     | Only in the current chat | Project files + instructions           | SKILL.md + resources        |
| Activation  | Manual                   | Open project                           | Automatic or via /slash     |
| UX example  | “Rephrase this finding”  | Blog production, proposal coordination | Recruitment email, screener |


The rule of thumb: If you perform a task more than once a month and it has the same basic structure every time, it’s a candidate for a skill. If you need to ask follow-up questions and iterate during the process, it belongs in a project. If it’s a one-time thing, a prompt is sufficient.


Where in the UX research process are skills worthwhile?

More places than most people realize. UX AI doesn’t have its greatest impact on that one big analysis, but on the many small tasks that run through every project. Here are the typical phases with specific tasks that work as skills.


Before the study: Everything that a briefing provides as input

Preparing a study consists of tasks that hardly change from project to project. The content changes, but the structure remains the same. This is exactly the sweet spot for skills.


Recruitment emails, for example. A skill that distills a briefing into a finished email to the recruitment agency: study type, target group, screening criteria, timeframe, NDA notice. Important for such skills: consciously decide what the skill should not do. Suggesting an incentive amount, for example, does not belong in the skill because that is a budget decision that remains with the human.
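A sketch of what such a skill file could contain. The structure and wording here are assumptions for illustration, not a tested template:

```markdown
---
name: recruitment-email
description: Drafts an email to the recruitment agency from a study briefing.
---

# Recruitment Email

Given a study briefing, write an email to the recruitment agency with:

1. Study type and setting (remote or in person)
2. Target group and screening criteria
3. Timeframe and number of sessions per participant
4. NDA notice

Explicitly out of scope: do NOT propose an incentive amount. Insert the
placeholder [INCENTIVE] instead; this is a budget decision for the researcher.
```

Note how the negative rule is written into the skill itself, so the boundary holds on every run, not just when you remember to mention it.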


The same logic applies to screener questionnaires. Generate a screener with filter logic, knockout questions, and quotas from the target group description in the briefing. Or for discussion guides: warm-up, topic blocks based on research questions, tasks, cool-down, with time estimates per block. Or for consent forms and privacy policy texts, where only the variables change (study type, target group, setting, recording), but the GDPR requirements remain consistent.


After the study: From raw data to report

The analysis phase is full of tasks that must be executed with technical precision but do not require a flash of inspiration. Transforming raw observations into clear findings according to a fixed framework: What was observed, among how many participants, why it likely happened, and what impact it has. This is precisely where a skill prevents the notorious “That’s an observation, not a finding” problem.


Severity ratings based on the Nielsen scale for a list of issues. Extracting quotes from transcripts, clustering them by theme, and entering them into a table with participant IDs. Condensing executive summaries from sprawling reports. Drafting stakeholder emails after the study concludes: What was the goal, what did we learn, what do we recommend, what are the next steps.
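The observation-to-finding framework described above could be encoded in a skill along these lines. Again a hypothetical sketch; names and phrasing are my own:

```markdown
---
name: finding-formulator
description: Turns raw session observations into structured usability findings.
---

# Finding Formulator

For each raw observation the user provides, produce a finding with four parts:

- **What was observed** — the concrete behavior, in one sentence
- **How often** — among how many of the participants it occurred
- **Likely cause** — why it probably happened, clearly marked as interpretation
- **Impact** — the consequence for the user or the business

If the input is an interpretation without an underlying observation, ask for
the observation instead of inventing one.
```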


All of this always follows the same choreography. And most researchers today write all of this from scratch every single time.


Project overview: Briefing check and method selection

A briefing check skill takes a client briefing and runs it through a checklist: Are the research questions formulated with sufficient precision? Is the target audience defined in measurable terms? Does the proposed method fit the research question? The result is a list of open points for the kickoff call. It’s no substitute for the conversation, but it’s preparation that takes five minutes instead of thirty.


Method recommendations for early consulting meetings work similarly. The client describes their problem, and the skill suggests suitable methods with justification, typical setup, and effort estimates.


What makes a task a good candidate for a skill?

Not everything that regularly lands on your desk is suitable for a skill. The task needs four characteristics.


It is recurring. At least once a month, preferably more often. One-time special cases are prompts, not skills.


It is structured. There is a recognizable pattern: same sections, same logic, same quality criteria. The variables change, but the framework does not.


It requires few follow-up questions. If you have to make three context-dependent decisions every time you run it, a project is a better fit.


And the output is clearly defined. An email, a questionnaire, a table, a rating. No open-ended result that only takes shape through discussion.


Where skills don’t fit: interpretation of research results. Choosing methods for nebulous questions. Empathetic facilitation. Strategic consulting.


Anything where judgment makes the difference cannot be delegated. And that’s a good thing. Skills clear away the routine so that more capacity remains for precisely this work.


Where is the journey headed?

Three developments that should be on the radar for UX research.


Platform independence. Skills have been an open standard since December 2025. A skill built for Claude will, in principle, also work in Cursor, Codex, or Gemini CLI. Invest once, be locked in nowhere.


Skills that train themselves. Anthropic aims to eventually enable agents to independently create, edit, and evaluate skills. Specifically, this could mean: A finding formulator becomes more precise after many iterations because it learns which phrasing patterns work in the given context. As of May 2026, this is still a long way off, but the direction is unmistakable.


Skills plus MCP Connectors. Skills provide the knowledge (how to complete a task), while MCP Connectors provide the link to external tools such as Jira, Confluence, or Dovetail. Anthropic aims to integrate both systems more closely. For UX research, this would mean: a finding skill that knows what a clean finding should look like, paired with an MCP connector that pushes the result directly into the research repository. No more copy-pasting between tools.


Conclusion: Start with the task that annoys you the most

The tools are here. Prompts for one-off questions, projects for iterative work, skills for recurring routines. The question isn’t whether UX AI is worth it for your daily research. The question is where you’ll start.


The pragmatic recommendation: Pick the one task that comes up at least once a month and where you catch yourself thinking every time, “I’ve written this a hundred times already.” Build a skill for it. It takes half an hour. After that, you’ll know if the approach works for you.


Whether it’s the recruitment email, the screener, the briefing check, or the stakeholder email after the study: it doesn’t matter. The effect is the same. Less routine, more capacity for the work that really counts.


FAQ:

Do I need coding knowledge to build a skill? 

No. A SKILL.md file is Markdown with a short YAML header for the name and description. You write the rest in natural language. For more complex skills, you can add scripts, but you don’t have to.


How much does it cost? 

Skills are part of your existing Claude plan. No additional costs. The prerequisite is that Code Execution is enabled in the settings. Skills are available on all plans, from Free to Enterprise.


How do Skills differ from a good prompt? 

A prompt is one-time. You type it, get your result, and the next time you type it again. A skill is reusable, triggers automatically when Claude recognizes the appropriate task, and delivers consistent results because the instructions are hard-coded.


Can I share skills within a team? 

On Team and Enterprise plans, yes. You can share skills with colleagues or the entire organization. On other plans, custom skills are tied to your account.


Do skills only work in Claude? 

No. Since December 2025, the Agent Skills specification has been an open standard. Tools like Cursor, VS Code, Codex, and Gemini CLI support the format.



💌 Not enough yet? Read on in our newsletter. 

Comes out four times a year. Sticks with you longer. https://www.uintent.com/de/newsletter




AUTHOR

Tara Bosenick

Tara has been active as a UX specialist since 1999 and has helped to establish and shape the industry in Germany on the agency side. She specialises in the development of new UX methods, the quantification of UX and the introduction of UX in companies.


At the same time, she has always been interested in developing a corporate culture in her companies that is as ‘cool’ as possible, in which fun, performance, team spirit and customer success are interlinked. She has therefore been supporting managers and companies on the path to more New Work / agility and a better employee experience for several years.


She is one of the leading voices in the UX, CX and Employee Experience industry.
