AI Analysis Troubleshooting
Common issues with AI-powered analysis in ScreenerBot and how to diagnose and resolve them.
Quick Diagnostic Steps
Before You Start
If AI analysis isn't working as expected, follow these steps to diagnose the issue:
- Check the Dashboard: Look at the Events page for AI-related errors
- Check Console Logs: Run ScreenerBot in terminal to see detailed error messages
- Verify Configuration: Ensure your `config.toml` has correct API keys and settings (see the sketch after this list)
- Test Provider: Try a simple API call to your provider outside ScreenerBot to verify it works
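As a reference while verifying your configuration, here is a minimal sketch of the AI-related sections this guide refers to. Only `[ai.analysis]`, `[ai.providers.groq]`, and the keys quoted elsewhere on this page come from the guide itself; the `api_key` field name and the exact placement of each key are assumptions, so treat this as a shape to compare against rather than an authoritative schema.

```toml
# Minimal sketch of the AI sections in config.toml.
# Section names [ai.analysis] and [ai.providers.groq] come from this guide;
# the api_key field name and exact key placement are assumptions.

[ai.analysis]
filtering_enabled = true                      # master switch for AI filtering
filtering_providers = ["groq", "deepseek"]    # primary provider plus fallback

[ai.providers.groq]
enabled = true
api_key = "your-groq-key"   # assumed field name; paste the key with no extra spaces
rate_limit = 20             # requests per minute
timeout = 10                # seconds
```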
Common Issues & Solutions
Issue: API Key Invalid or Rejected
Error: "Invalid API key" or "Unauthorized" or "Authentication failed"
Possible Causes:
- API key was copied incorrectly (missing characters, extra spaces)
- API key was revoked or expired on the provider's dashboard
- Using wrong key format for the provider (e.g., using OpenAI key for Groq)
- Provider account has billing issues or exceeded quota
Solutions:
- Log into provider dashboard and verify the API key is active
- Generate a new API key and update your `config.toml` (example below)
- Ensure no extra spaces before/after the key in the config file
- Check billing status on provider dashboard (some require payment method even for free tier)
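If you suspect the key itself got mangled, compare it against a block like the one below. The `api_key` field name is an assumption (check the Reference section for the exact schema); the point is that the value should be quoted, on one line, with nothing before or after it.

```toml
[ai.providers.groq]
enabled = true
# Assumed field name. Paste the key exactly as the provider dashboard shows it.
api_key = "your-groq-key"
# Common mistake: a stray space inside the quotes is sent as part of the key,
# and the provider rejects it as invalid.
# api_key = "your-groq-key "
```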
Issue: Rate Limit Exceeded
Error: "Rate limit exceeded" or "Too many requests" or "429 error"
Possible Causes:
- Rate limit in config is set higher than provider allows
- Too many tokens being screened simultaneously
- Provider has tier-based limits and you hit daily/monthly cap
Solutions:
- Lower `rate_limit` in config (e.g., Groq free tier: use 20 RPM instead of 30)
- Configure fallback providers: `filtering_providers = ["groq", "deepseek"]` (see the sketch below)
- Upgrade to paid tier on provider dashboard for higher limits
- Wait until rate limit window resets (typically 1 minute for RPM limits)
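A sketch combining the rate-limit and fallback changes above. It assumes `rate_limit` lives under the provider section and that DeepSeek uses the same `[ai.providers.*]` layout as the documented Groq block.

```toml
[ai.analysis]
# Fallback order: if Groq returns 429 (rate limited), the next provider is tried.
filtering_providers = ["groq", "deepseek"]

[ai.providers.groq]
enabled = true
rate_limit = 20   # stay safely under Groq's free-tier 30 RPM cap

[ai.providers.deepseek]
enabled = true    # assumed to mirror the Groq block
```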
Issue: AI Analysis is Very Slow
Symptom: AI filtering takes 5-30 seconds per token, slowing down screening
Possible Causes:
- Using slow provider (e.g., OpenAI GPT-4 instead of Groq)
- Timeout setting is too high, waiting for unresponsive provider
- Network latency or slow internet connection
- Using large/complex model (e.g., GPT-4o instead of GPT-4o-mini)
Solutions:
- Switch to Groq (fastest) or DeepSeek for filtering: `filtering_provider = "groq"` (see the sketch below)
- Lower timeout: `timeout = 5` instead of `timeout = 30`
- Use smaller models: `gpt-4o-mini` instead of `gpt-4o`
- For Ollama (local), use smaller models: `llama3.2` instead of `llama3.1:70b`
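The speed-related settings above, collected into one hedged sketch. The `model` field name and the placement of `timeout` under the provider section are assumptions; only the keys quoted in the list come from this guide.

```toml
[ai.analysis]
filtering_provider = "groq"   # Groq is the fastest documented option for filtering

[ai.providers.groq]
timeout = 5                   # seconds; fail fast instead of waiting on a slow response

[ai.providers.openai]
enabled = true
model = "gpt-4o-mini"         # assumed field name; the smaller model responds much faster
```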
Issue: AI Analysis Not Running
Symptom: Tokens are being filtered but AI analysis doesn't appear to run
Possible Causes:
- AI filtering not enabled in config: `filtering_enabled = false`
- Provider not configured or disabled: `enabled = false`
- Tokens failing numerical filters before reaching AI analysis
- ScreenerBot not restarted after config changes
Solutions:
- Verify `filtering_enabled = true` under `[ai.analysis]` in config (see the sketch below)
- Ensure provider is enabled: `[ai.providers.groq]` with `enabled = true`
- Restart ScreenerBot after config changes (AI config requires restart)
- Check Events dashboard for AI-related logs to confirm it's running
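Both switches from the list above must be on for filtering to run at all. A minimal sketch using only the section and key names quoted in this guide; restart ScreenerBot after editing.

```toml
[ai.analysis]
filtering_enabled = true   # AI filtering runs only when this is true

[ai.providers.groq]
enabled = true             # the provider itself must also be enabled
```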
Issue: Request Timeout Errors
Error: "Request timed out" or "Connection timeout"
Possible Causes:
- Provider is experiencing downtime or high load
- Network connectivity issues on your end
- Timeout setting is too low for the provider/model
- For Ollama: local model is too large for your hardware
Solutions:
- Increase timeout: `timeout = 30` instead of `timeout = 5` (see the sketch below)
- Configure fallback providers to handle downtime automatically
- Check provider status page for outages (e.g., status.groq.com)
- For Ollama: use smaller model or allocate more RAM to Ollama
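A short sketch of the timeout and Ollama adjustments above. The placement of `timeout` under the provider section and the `model` field name are assumptions.

```toml
[ai.providers.openai]
timeout = 30   # seconds; give a slow or heavily loaded provider more headroom

[ai.providers.ollama]
enabled = true
model = "llama3.2"   # assumed field name; a smaller local model is less likely to time out
```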
Issue: AI Gives Incorrect Recommendations
Symptom: AI marks good tokens as scams or approves obvious scams
Possible Causes:
- Using small/weak model (e.g., `llama3.2` instead of `llama3.1-70b`)
- Model hasn't been trained on crypto/DeFi scam patterns
- Scam is using novel technique AI hasn't seen before
- Confidence threshold for auto-blacklist is too low
Solutions:
- Upgrade to larger/better model: GPT-4o-mini, Claude-3.5-Sonnet, or llama-3.1-70b
- Use advisory mode (`filtering_enforce = false`) and manually review
- Increase auto-blacklist confidence: `auto_blacklist_confidence = 0.9` (see the sketch below)
- Combine AI with RugCheck for better scam detection coverage
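The two settings above as a config sketch. Both key names come from this guide; placing them under `[ai.analysis]` is an assumption.

```toml
[ai.analysis]
filtering_enforce = false          # advisory mode: AI verdicts are logged, not enforced
auto_blacklist_confidence = 0.9    # only auto-blacklist on very high-confidence verdicts
```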
Provider-Specific Issues
Groq
Common Issue: Rate limits hit quickly
The free tier allows 30 RPM; with multiple tokens screening simultaneously, it's easy to hit the limit.
Solution: Set `rate_limit = 20` or configure DeepSeek as a fallback.
Ollama (Local)
Common Issue: "Connection refused"
The Ollama service isn't running, or `base_url` points to the wrong address.
Solution: Run `ollama serve` in a terminal. Set `base_url = "http://localhost:11434"` in config (see the sketch below).
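A sketch of the Ollama provider block, assuming it follows the same `[ai.providers.*]` layout as the documented Groq section. Start `ollama serve` first so something is actually listening on that port.

```toml
[ai.providers.ollama]
enabled = true
base_url = "http://localhost:11434"   # Ollama's default local API address
```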
OpenAI
Common Issue: High costs
GPT-4o can cost $0.01+ per analysis. With 100 tokens/day, that's $1/day.
Solution: Use `gpt-4o-mini` (10x cheaper) or switch to Groq/DeepSeek for filtering.
DeepSeek
Common Issue: Slower than Groq
DeepSeek has generous quotas but responds 2-3x slower than Groq.
Solution: Use Groq for filtering (speed), DeepSeek for entry/exit analysis (less time-sensitive).
Debugging Tips
- Enable Verbose Logging: Run ScreenerBot with the `RUST_LOG=debug` environment variable to see detailed AI request/response logs.
- Test Provider Independently: Use curl or Postman to test API calls to your provider outside ScreenerBot. This confirms whether the issue is with the provider or with ScreenerBot's config.
- Check Events Dashboard: The Events page in the dashboard shows all AI-related events, errors, and analysis results. Useful for diagnosing what the AI is doing.
- Restart After Config Changes: AI provider configuration requires a full restart. Changes to analysis features (`filtering_enabled`, etc.) may hot-reload depending on the setting.
Still Need Help?
Get Support
If you're still experiencing issues after trying these solutions:
- Join the Discord community for real-time help from other users
- Check GitHub Issues for known bugs or submit a new issue
- Review provider documentation for provider-specific issues
- Include error logs, config (without API keys), and provider name when asking for help