Think specialized, not general
Think of AI agents as your specialized analysts - each one can be configured to focus on specific areas of your business. For example, you might create a “Marketing Assistant” that only has access to marketing data like campaign performance, lead generation, and customer acquisition metrics. This focused approach ensures more accurate, relevant responses and prevents sensitive data from being accessible to the wrong teams. To find out more about how to configure specific access, see Limiting access to specific explores/fields.
Document your data thoroughly
Good documentation is crucial for AI to understand your data models and provide meaningful insights. The quality of the results depends on the quality of your metadata and documentation (a short example follows the list below).
- Write clear, descriptive names for metrics and dimensions
- Add detailed descriptions to all metrics and dimensions explaining what they represent
- Include example questions in descriptions that AI could answer with the metric
- Use AI hints to provide additional context specifically for AI agents
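As an illustration, here is a minimal sketch of what well-documented metadata can look like in a dbt model’s YAML using Lightdash-style meta blocks (the model, column names, and wording are illustrative, not taken from your project):

```yaml
models:
  - name: orders
    description: "One row per customer order, updated hourly."
    columns:
      - name: status
        description: >
          Current fulfilment state of the order (placed, shipped, or returned).
          Can answer questions like "How many orders were returned last month?"
        meta:
          dimension:
            label: "Order status"
      - name: amount
        description: "Order value in USD, excluding tax and shipping."
        meta:
          metrics:
            total_revenue:
              type: sum
              label: "Total revenue"
              description: "Sum of order value in USD. Use for questions about sales or revenue."
```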
Using AI hints
AI hints are specialized metadata fields that provide additional context specifically for AI agents. These hints help the AI better understand your data models, business logic, and how to interpret your metrics and dimensions.
AI hints are internal metadata used only by AI agents and are not displayed to users in the Lightdash interface. When both AI hints and descriptions are present, AI hints take precedence for AI agent prompts.
Model-level hints
Provide context about the entire table:
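For example, a model-level hint could look like this (a sketch assuming the hint is set via an ai_hint key in the model’s meta block; the table and wording are illustrative):

```yaml
models:
  - name: subscriptions
    meta:
      ai_hint: >
        One row per customer subscription. Use this model for questions about
        MRR, churn, and plan changes; cancelled subscriptions are included.
```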
Dimension-level hints
Explain individual columns:
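For example (a sketch assuming a dimension-level ai_hint under the column’s meta block; names and values are illustrative):

```yaml
columns:
  - name: plan_tier
    description: "The subscription plan the customer is on."
    meta:
      dimension:
        ai_hint: >
          Values are 'free', 'pro', and 'enterprise'. Only 'pro' and 'enterprise'
          are paid plans; exclude 'free' from revenue analysis.
```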
Metric-level hints
Clarify what metrics measure:
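For example (a sketch assuming a metric-level ai_hint alongside the metric definition; names and wording are illustrative):

```yaml
columns:
  - name: mrr_amount
    meta:
      metrics:
        total_mrr:
          type: sum
          ai_hint: >
            Monthly recurring revenue in USD, excluding one-time setup fees.
            Prefer this metric for questions about revenue growth or MRR.
```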
Using verified answers
Mark generated charts and dashboards as verified answers to help your agent learn from the best examples and provide better responses to similar questions.
- Verify diverse examples - Mark answers covering different types of questions and use cases to give your agent a broad knowledge base
- Quality over quantity - Focus on verifying only your best, most accurate responses. A few excellent examples are better than many mediocre ones
- Update regularly - As your data and business needs evolve, review your verified answers and update them to reflect current best practices
Using evaluations
- Encourage all users to provide feedback - Have your team actively use the thumbs-up/thumbs-down feature when interacting with agents. This feedback is crucial for identifying where the agent needs improvement
- Admins: Review feedback regularly - As an admin, set aside time to review thumbs-down responses, understand patterns, and give targeted feedback to your agent through evaluations
- Build from real interactions - In the AI Agent Admin Panel, filter for thumbs-down threads and add the failed prompts to an evaluation set. Refine the prompt with your expected response until the eval status is passed. This helps test fixes and prevent regressions on real-world issues your team actually encounters
- Run evaluations consistently - Make evaluations a regular practice. Test before deploying changes to agent instructions or your semantic layer, and run periodic checks to ensure your agent maintains quality as it learns and evolves
Writing effective instructions
Think of your instructions as teaching your AI agent about your world. The better you explain your business context and preferences, the more useful and relevant your agent’s responses will be. Focus on four key areas: what your agent should know about your industry, your team’s goals and constraints, how you like data analyzed, and how results should be communicated.
What to include
- Industry terminology and key metrics including acronyms your team uses regularly (e.g., “CPM means Cost Per Mille, not cost per mile” or “Our ARR calculations exclude one-time setup fees”)
- Communication style for how results should be presented to your team (e.g., “Keep explanations simple for non-technical stakeholders” or “Always include actionable next steps”)
- Business constraints like regulatory requirements or budget limitations that affect decision-making
- Analysis preferences your team relies on (e.g., “Always compare month-over-month growth” or “Flag any churn rates above 5% as concerning”)
- Context for interpreting your data (e.g., “Our Q4 always shows higher sales due to holiday promotions” or “Weekend traffic is typically 40% lower”)
Good example - Sales Team Agent:
You analyze sales performance for our SaaS company. Focus on MRR, churn, and pipeline health. When MRR growth drops below 10% month-over-month, flag it as concerning. Present insights in simple terms that our sales managers can act on immediately. Always include trend explanations and next steps.
What to avoid
- Contradictory instructions that create confusion about priorities
- Overly complex rules that are hard to follow consistently
- Vague guidance like “be helpful” without explaining what that means for your situation
- Too many different focus areas in one agent. Keep each agent focused; there is no limit to the number of agents you can create
- Restating basic features. Don’t tell the AI to “create charts” since it already does that
Poor example - Too vague:
Be helpful and analyze data well. Create good charts and explain things clearly.