Which AI model works for your business?
Switzerland-specific AI benchmarking in DE/FR/IT. We evaluate models on regulatory, legal, and financial tasks that matter for Swiss enterprises.
Performance Products
- Model rankings and head-to-head comparisons
- Failure mode analysis and selection recommendations
- Standard mode: quarterly benchmark intelligence
- Custom mode: full pipeline against your model
- Full ranking table with domain-specific performance
- Swiss language quality (DE/FR/IT)
- EU AI Act compliance scores
- Total cost of ownership analysis
- Cybersecurity, Finance, Medical domains available
- Models run locally. No data leaves your premises
- Custom fine-tuning on your data available on request
Built for Swiss reality.
Swiss-Bench comprises 395 proprietary evaluation scenarios spanning 3 regulatory domains, 7 task types, and the 3 official languages, testing models in German, French, and Italian on Swiss-specific tasks. Unlike generic benchmarks, Swiss-Bench measures what matters for Swiss enterprises: real-world performance on regulatory, legal, and financial tasks.
We tested 50+ domain models. Four passed our quality bar.
Most fine-tuned models on HuggingFace publish inflated benchmark scores. We evaluated over 50 domain-specific open-source models across cybersecurity, finance, and medicine, using our full evaluation stack including Swiss-Bench. We rejected models with regressions, unverifiable claims, or restrictive licenses. Four models demonstrated real, measurable improvement over their base models.
| Model | Domain | Size | Domain Delta | HAAS Score |
|---|---|---|---|---|
| Helvetic Med 14B | Medical | 14B | +6.5pp vs base | 77.6 |
| Helvetic Cyber 8B | Cybersecurity | 8B | +7–13pp vs base | 77.2 |
| Helvetic Finance 8B | Finance | 8B | +19.7pp vs base | 74.1 |
| Helvetic Med 4B | Medical | 4B | +13.7pp vs base | 71.6 |
What makes these models different?
Each model in the Helvetic AI Select library has been independently evaluated against its base model. We tested for domain accuracy gains, safety regressions, Swiss language performance (DE/FR/IT), and EU AI Act compliance. Models showing inflated benchmarks or real-world regressions were rejected, including one that scored 72.5% on leaderboards but dropped 29 percentage points on clinical cases.
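The quality bar described above can be sketched as a simple filter: accept a fine-tuned model only if it shows a real in-domain gain over its base model and no large regression on held-out checks. This is an illustrative sketch, not our actual pipeline; the function name, score dictionaries, and thresholds are all hypothetical.

```python
# Hypothetical sketch of a model quality bar. Scores are accuracies in
# percent; "domain" is the in-domain benchmark, all other keys are
# held-out checks (safety, clinical cases, Swiss language quality, ...).
# Thresholds are illustrative, not the real evaluation criteria.

def passes_quality_bar(base_scores, tuned_scores,
                       min_domain_gain=2.0, max_regression=5.0):
    gain = tuned_scores["domain"] - base_scores["domain"]
    if gain < min_domain_gain:
        return False  # no real, measurable in-domain improvement
    for task, base in base_scores.items():
        if task == "domain":
            continue
        if base - tuned_scores[task] > max_regression:
            return False  # regression on a held-out check: reject
    return True

# A model with a strong headline score but a large drop on clinical
# cases is rejected despite its leaderboard number.
base  = {"domain": 58.0, "clinical_cases": 70.0, "safety": 90.0}
tuned = {"domain": 72.5, "clinical_cases": 41.0, "safety": 89.0}
print(passes_quality_bar(base, tuned))  # False: 29pp clinical regression
```

The point of the sketch is that a single headline metric is not enough: a model must clear every held-out check, not just its own domain benchmark.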
Fine-tuning: when a small model beats the large ones.
Domain-specific fine-tuning on curated, expert-verified data can dramatically outperform general-purpose models. A fine-tuned 8B parameter model, trained on a meticulously designed domain-knowledge-driven instruction dataset, consistently outperforms models with 10–25× more parameters on domain-specific tasks.
Cybersecurity: CyberPal-CH
| Model | Parameters | CyberBench-CH Score | Runs Locally |
|---|---|---|---|
| GPT-4o | >200B (est.) | 68% | No (API only) |
| Llama 3 70B (base) | 70B | 61% | No (too large) |
| Foundation-Sec-8B (Cisco) | 8B | 59% | Yes |
| Qwen 2.5 8B (base) | 8B | 51% | Yes |
| CyberPal-CH 8B (fine-tuned) | 8B | 79% | Yes |
The intelligence you receive.
“Which model should we use?”
Your team is choosing between 3–5 AI models for a Swiss-German customer service chatbot. Vendor benchmarks rarely reflect real-world Swiss performance. Our benchmark report shows exactly which model handles Verwaltungsdeutsch (Swiss administrative German), French, and Italian, with accuracy scores, hallucination rates, and operational cost estimates. You make the decision with data, not opinions.
“Is our AI making things up?”
Your AI system cites Swiss regulations in customer-facing responses. But does Art. 41 OR (Swiss Code of Obligations) actually say what the model claims? Our evaluation quantifies the hallucination rate: which topics are reliable, where the model fabricates facts, and how often it invents legal references that don’t exist.
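One component of such an evaluation, checking whether cited provisions exist at all, can be sketched as a lookup against a reference index. This is a hypothetical illustration: the index, the citation list, and the function are invented for the example, and a real evaluation must also verify that existing articles say what the model claims.

```python
# Hypothetical sketch: flag legal citations that do not exist in a
# reference index of real provisions. The tiny index below is assumed
# complete for the domain under test; it is illustrative only.

REAL_PROVISIONS = {"Art. 41 OR", "Art. 97 OR", "Art. 8 ZGB"}

def hallucination_rate(cited_provisions):
    """Fraction of cited provisions absent from the reference index."""
    if not cited_provisions:
        return 0.0
    fabricated = [c for c in cited_provisions if c not in REAL_PROVISIONS]
    return len(fabricated) / len(cited_provisions)

# "Art. 5000 OR" is an invented, non-existent article.
citations = ["Art. 41 OR", "Art. 5000 OR", "Art. 8 ZGB", "Art. 41 OR"]
print(f"{hallucination_rate(citations):.0%}")  # 25%: 1 of 4 is fabricated
```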
“Can we trust the numbers it generates?”
Your AI processes financial reports, insurance claims, or patient summaries. A single wrong figure (an incorrect premium calculation, a fabricated lab value, a misquoted balance sheet entry) creates liability. Our domain-specific benchmarks measure factual accuracy on Swiss financial data, healthcare terminology, and industry-specific reasoning, so you know exactly where the model is reliable and where it needs guardrails.
Schedule a scoping call.
Start with a 5-model evaluation or commission a full 30+ model sweep. The first step is always a scoping call; no preparation needed.