🧪 Tests
☕ Code
🥒 Gherkin
⚠️ Risks
🗄️ Data
INPUT
1 story
MODULES
Both off = fastest (1 LLM call)
Waiting...
Waiting...
Waiting...
Waiting...
Waiting...
Waiting...
No saved analyses
0
Analyses Generated
↑ Active Session
0
Test Cases Created
↑ All Sessions
0
Risks Identified
↑ All Sessions
0s
Total Generation Time
↑ All Sessions
RECENT ACTIVITY
No activity
GHERKIN GENERATOR
🛠️ Ollama Setup
Set up your local LLM in 4 steps. Free, private, unlimited.
1
Install Ollama
Download and install Ollama for your OS. Takes 2 minutes.
→ https://ollama.com/download
2
Download a model
Recommended for QA: llama3 (8B, ~5GB) or qwen2.5-coder (7B, best for code).
ollama pull llama3
ollama pull qwen2.5-coder
3
Start Ollama
In a terminal, start the Ollama server. It runs in the background.
ollama serve
4
Start QA Copilot
Double-click start.bat (Windows) or run ./start.sh (Mac/Linux). The tool opens automatically in your browser.
python3 server.py
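To confirm steps 2 and 3 worked before launching the tool, you can query Ollama's local REST API. The sketch below uses only the standard library and assumes Ollama's default port, 11434; `/api/tags` is the endpoint Ollama exposes to list installed models.

```python
# Minimal sketch: check the Ollama server (step 3) is up and the model
# from step 2 is installed. Assumes the default port 11434.
import json
import urllib.error
import urllib.request

def parse_model_names(payload):
    """Extract model names from Ollama's /api/tags response body."""
    return [m.get("name", "") for m in payload.get("models", [])]

def list_models(base_url="http://localhost:11434"):
    """Return installed model names, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            return parse_model_names(json.load(resp))
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    models = list_models()
    if models is None:
        print("Ollama is not running - start it with: ollama serve")
    elif any(name.startswith("llama3") for name in models):
        print("Ready: llama3 is installed.")
    else:
        print("Server up, but pull a model first: ollama pull llama3")
```

Run it before `python3 server.py` to catch the two most common setup mistakes (server not started, model not pulled) early.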
🔬 Test Generation
Methodology, frameworks, and output settings
TESTS
Methodology
Number of test cases
CODE
Framework
Default Java package
PERF
Performance tool
DATA
Generation mode
📥 Ingestion & RAG
Data sources, knowledge bases, and vector store
ATLASSIAN / JIRA
Base URL (e.g. https://your-domain.atlassian.net)
Email
API Token (leave blank to keep current)
Confluence Space Keys (comma-separated)
JQL FILTERS for RAG ingestion
Loading…
SWAGGER / OPENAPI
Loading…
WEB SOURCES
Loading…
REPOSITORIES
CODEBASE SOURCES
🔗 Integrations & Tools
Test management, CI/CD, and external services
XRAY CLOUD
Client ID
Client Secret (leave blank to keep current)
Link Type
Test Type Field (optional)
Gherkin Field (optional)
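The Client ID and Client Secret above are exchanged for a bearer token via Xray Cloud's v2 authenticate endpoint; the response body is the JWT as a bare JSON string, which is then sent as `Authorization: Bearer <token>` on later calls. A stdlib-only sketch:

```python
# Sketch: exchange Xray Cloud Client ID/Secret for a JWT bearer token.
import json
import urllib.request

XRAY_AUTH_URL = "https://xray.cloud.getxray.app/api/v2/authenticate"

def build_auth_payload(client_id, client_secret):
    """JSON body expected by the authenticate endpoint."""
    return json.dumps({"client_id": client_id, "client_secret": client_secret})

def get_xray_token(client_id, client_secret):
    """POST credentials; the response body is the JWT as a JSON string."""
    req = urllib.request.Request(
        XRAY_AUTH_URL,
        data=build_auth_payload(client_id, client_secret).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)  # bare JWT string
```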
OLLAMA — RAG SERVER
Ollama URL
Model
QDRANT — VECTOR STORE
Qdrant URL
Collection
EMBEDDING MODEL
Model (changing the model requires re-ingestion of all sources)
🔐 User Management
Manage users, roles, and access rights.
USERS
Loading…
🤖 Model Training
Review training pairs, trigger fine-tuning, and swap the active model.
DATASET STATS
— total
— pending
— accepted
— rejected
PAIR REVIEW QUEUE
Loading…
📡 Platform Monitoring
Health, usage metrics, and logs.
Ollama
RAG
Server
MEM: -- MB
CPU: --%
Connections: --
Requests Today
--
Generations
--
Avg Response
--
Error Rate
--
Active Users (24h)
--
Tokens Used
--
REQUESTS OVER TIME
GENERATION TIME DISTRIBUTION
Log Retention:
days
🧬 Fine-Tune
Train a custom model on your QA data.
DATASET
Raw: 0
Pending: 0
Approved: 0
Rejected: 0
Total: 0
JOBS
No jobs yet
LIVE LOG
🤖 Agent
Run autonomous test generation — read, enrich, generate, validate, push.
STORY
Story Key (optional)
Story Text
CONFIGURATION
Modules
Framework
Style
Max Retries
2 retries
Auto-Push Threshold
Score ≥ 4
ADVANCED CONFIG
Brain Provider
Brain Model
Max Iterations
Approval Levels
AGENT STATUS
○ Reasoning (ReAct loop)
○ Generating tests
○ Complete
LIVE LOG
RECENT JOBS
No jobs yet
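The agent loop described above (generate, validate, retry up to Max Retries, auto-push when the score meets the threshold) can be sketched as follows. `generate` and `score` stand in for the real LLM call and validator; both names are hypothetical, and the defaults mirror the form values (2 retries, Score ≥ 4).

```python
# Sketch of the agent's generate -> validate -> retry loop with the
# auto-push threshold. `generate` and `score` are hypothetical stand-ins
# for the real LLM call and validator.

def run_agent(generate, score, max_retries=2, push_threshold=4):
    """Return (tests, score, pushed). Retry until the score meets the
    threshold or retries run out; auto-push only on success."""
    best_tests, best_score = None, -1
    for attempt in range(max_retries + 1):  # initial try + retries
        tests = generate(attempt)
        s = score(tests)
        if s > best_score:
            best_tests, best_score = tests, s
        if s >= push_threshold:
            return tests, s, True   # auto-push (e.g. to Xray)
    return best_tests, best_score, False  # keep for manual review
```

A run whose second attempt scores 5 is pushed; a run that never reaches the threshold returns its best attempt for review.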