Copilot Studio best practice¶
This article covers best practices for working with Copilot Studio, including how to handle limitations and how to debug and test agents and skills.
Best practices: agent design¶
While both skills and system prompts are customizable to meet your specific needs, following certain guidelines will help you create more effective agents:
- Keep your agents focused and purposeful. Rather than creating a single all-purpose agent, develop dedicated ones for specific use cases. For instance, instead of combining plant status monitoring and machine downtime analysis in one agent, create two separate ones, each specialized for its task.
- Complexity is the enemy of reliability. The more intricate your instructions become, the higher the risk of receiving lower-quality responses from your agent.
- Make sure your skills return only the relevant data. Aggregate and filter to limit the amount of raw data the LLM has to process.
- If you need to process numerical data, create a skill for it; LLMs are unreliable at arithmetic. A sketch of such a skill follows this list.
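For example, a skill built as a Visual Flow Creator flow can pre-aggregate raw downtime records in a function node before the result reaches the LLM. The following JavaScript is a minimal sketch, not a fixed Copilot Studio contract; the payload shape with `machineId` and `durationMinutes` fields is an assumption for illustration:

```javascript
// Function node sketch: aggregate raw downtime records so the agent
// receives a compact summary instead of the full data set.
// Assumption: msg.payload is an array such as
// [{ machineId: "M1", durationMinutes: 12 }, ...] - adapt to your data.
const records = Array.isArray(msg.payload) ? msg.payload : [];

const summary = {};
for (const r of records) {
    const entry = summary[r.machineId] || { events: 0, totalMinutes: 0 };
    entry.events += 1;
    entry.totalMinutes += r.durationMinutes;
    summary[r.machineId] = entry;
}

// Return only the aggregated figures; the LLM should not
// do this arithmetic itself.
msg.payload = summary;
return msg;
```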
To maintain optimal performance, work within these practical limits:
- Maintain fewer than 10 skills per agent.
- Keep your system prompt under 500 words, and use clear, straightforward instructions. This is a recommendation, not a hard limit.
- For now, the number of agents that you can create is limited to 20 per tenant.
Best practices: agent instructions¶
Effective LLM agent instructions are the foundation of reliable agents. Unlike human-facing documentation, LLM agents require precise, structured guidance with explicit conditional logic and error handling to perform consistently. The following best practices help you write instructions that your agents can follow reliably and efficiently.
Consider the following:
- Structured conditional logic and decision trees
- Explicit parameter specifications and data handling
- Comprehensive error states and recovery procedures
- Clear termination conditions and success criteria
- Formatting optimized for LLM parsing and execution
Following these guidelines leads to more predictable agent behavior, reduced debugging time, and improved overall system reliability.
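As an illustration, an instruction fragment with explicit conditional logic and a defined error state might look like the following; the skill name `get_machine_downtime` is a hypothetical placeholder:

```
When the user asks about machine downtime:
1. If no machine ID is provided, ask for it before doing anything else.
2. Call get_machine_downtime with the machine ID and the requested time range.
3. If the skill returns no data, say that no downtime was recorded for that
   period and stop. Do not invent values.
4. Otherwise, report the total downtime and the three longest events.
```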
You can use any LLM (e.g., ChatGPT, Claude) to optimize your instructions. Here's a suggested prompt:
Analyze the provided instruction and optimize it for an LLM agent by:
- Clarifying ambiguous steps - Identify unclear instructions and make them explicit
- Adding decision logic - Specify what to do when conditions are met/not met
- Defining validation points - Clarify when to check inputs and what constitutes valid responses
- Structuring sequence flow - Make the order of operations crystal clear
- Handling edge cases - Address what happens when things don't go as expected
- Simplifying language - Use clear, direct instructions without unnecessary complexity
Keep the core workflow intact while making it unambiguous for an LLM to follow.
[Original instruction here]
When configuring an agent, note the following:
- Assign pre-built or custom skills to the custom agent.
- Predefined prompts are shown in the chat above the chat input field. Use them for typical questions, so that the user does not have to type, or to suggest how to start the conversation with the agent.
- Add a client where this agent should be used. Leaving it empty shows the agent in all integrations. To restrict an agent to Insights Hub Monitor, add the client `monitor`. Further clients will be supported in later releases.
- The summary shows all information in an overview and highlights possible configuration issues.
- When selecting an agent, the details are shown on the right side, where the user can edit, test or delete the agent.
Debugging and testing¶
You may be wondering:
- How did the agent arrive at a particular response?
- How has the response been generated?
- Which skills and tools did the agent use?
- What data was involved?
You can ask these questions directly in the chat. Open the chat in Copilot Studio using the 'Test' button on each agent.
Helpful questions in the chat are:
- `What tools do you have?` – to check if the agent is properly configured
- `What tools did you use?` – to analyze which tools the agent used to generate this answer
- `What was the tool’s output?` – to analyze the output of the tool to determine if the answer is plausible
In Visual Flow Creator, the following nodes may be helpful:
- `debug` – to check, e.g., the payload that is forwarded from the agent to the flow (link the debug node to the start node) or from the flow to the agent
- `trigger` – to manually input testing data to validate your flow
- `function` – to run JavaScript logic and transform the payloads for your needs
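As a minimal sketch of a `function` node body (the `msg.payload.parameters` shape below is an assumption, not a documented structure; verify the actual payload with a `debug` node first):

```javascript
// Function node sketch: normalize the payload coming from the agent
// before the rest of the flow consumes it.
// Assumption: the agent forwards skill parameters in msg.payload.parameters,
// e.g. { machineId: "M1", from: "...", to: "..." } - verify with a debug node.
const params = (msg.payload && msg.payload.parameters) || {};

msg.payload = {
    machineId: params.machineId || "unknown",
    // Fall back to the last 24 hours if no time range was supplied.
    from: params.from || new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString(),
    to: params.to || new Date().toISOString()
};

return msg;
```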



