Artificial intelligence is reshaping how businesses operate, from customer service chatbots and automated hiring tools to predictive analytics and fraud detection. Yet most Swiss SMEs adopting these technologies remain unaware of the legal obligations that come with them. In Switzerland, there is no AI-specific statute. Instead, the revised Federal Act on Data Protection (revFADP), in force since 1 September 2023, is the primary legal framework governing AI systems that process personal data.
This guide explains what the law requires, how Switzerland's approach compares to the European Union's, and what practical steps your SME should take today.
Switzerland Has No AI Law, But the Rules Are Clear
Unlike the European Union, which enacted the AI Act in 2024, Switzerland has not passed dedicated AI legislation. The Federal Council's November 2023 position paper on AI governance opted for a sectoral, risk-based approach rather than a sweeping horizontal regulation.
The Swiss strategy rests on three pillars:
- Technology neutrality: existing laws apply to AI without creating new regulatory layers
- Sector-specific adjustments: high-risk domains such as healthcare, finance, and the judiciary are addressed through targeted amendments to their respective legal frameworks
- International coordination: Switzerland participates in OECD and Council of Europe work on AI governance
In practice, this means the revFADP is the primary instrument applicable whenever an AI system processes personal data, which covers the vast majority of commercially available AI tools.
The FDPIC's Position on AI and Data Protection
The Federal Data Protection and Information Commissioner (FDPIC) has taken clear, consistent positions on the use of AI systems that process personal data.
In guidance documents and public statements, the FDPIC emphasises:
Transparency is mandatory. People whose data is processed by an AI system must be informed in a meaningful, accessible way, not buried in unreadable terms and conditions.
Human accountability must be preserved. Automated decisions with significant consequences for individuals cannot be fully delegated to algorithms. A meaningful human review mechanism must remain available.
Data minimisation applies to AI training. Training large models does not justify collecting more data than necessary. The proportionality principle applies at every stage, including data acquisition and model training.
Third-party AI tools do not shift responsibility. When you use ChatGPT, Microsoft Copilot, Gemini, or similar services, you remain the data controller. A data processing agreement with the vendor is legally required.
The FDPIC has announced that dedicated AI guidance for the business context will be published by the end of 2026. Until then, the general principles of the revFADP apply in full.
Key revFADP Provisions for AI
Duty to Provide Information (Art. 19 revFADP)
The revFADP requires controllers to inform data subjects when collecting their personal data. In the AI context, this means:
- Your privacy policy must describe which AI tools process user data and for what purposes
- Automated decision-making must be disclosed as such
- The logic behind automated processing that affects individuals must be explained in accessible terms
The duty to inform applies even when data is collected indirectly from third-party sources. The exceptions under Art. 19(2) revFADP, such as disproportionate effort, are construed narrowly in the AI context.
Profiling and High-Risk Profiling (Art. 5(f) and (g) revFADP)
The revFADP introduces a precise definition of profiling: any form of automated processing of personal data for the purpose of evaluating certain personal aspects of an individual.
High-risk profiling, a concept specific to Swiss law, is profiling that carries a high risk to the personality or fundamental rights of the data subject.
Common AI applications that qualify as high-risk profiling include:
- Algorithmic credit scoring
- Behavioural analysis for recruitment and talent assessment
- Customer segmentation affecting access to services
- AI-powered employee monitoring systems
For high-risk profiling, the revFADP requires explicit consent or another specific legal basis. A general legitimate interest will typically not suffice.
Data Protection Impact Assessment (Art. 22 revFADP)
Art. 22 revFADP mandates a Data Protection Impact Assessment (DPIA) when a processing operation is likely to result in a high risk to the personality or fundamental rights of data subjects.
For AI applications, a DPIA is mandatory in the following scenarios:
| AI Scenario | DPIA Required? |
|---|---|
| High-risk profiling | Yes |
| Automated decisions with legal or significant effects | Yes |
| Large-scale processing of sensitive data | Yes |
| Systematic monitoring via AI | Yes |
| HR chatbot collecting candidate data | Assess case by case |
| Productivity tool with no personal data | No |
If the DPIA concludes that a high residual risk remains despite the measures taken, you must consult the FDPIC before launching the processing operation.
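For teams that want to operationalise the table above in an internal tooling inventory, it can be encoded as a small triage helper. This is an illustrative sketch only, not legal advice: the scenario labels, the `dpia_required` function, and the default-to-assessment rule are our own simplification of the table, not terms from the revFADP.

```python
# Illustrative DPIA triage sketch based on the scenario table above.
# Scenario labels and the fallback rule are our own simplification,
# not legal text; borderline cases still need a lawyer's assessment.

DPIA_RULES = {
    "high_risk_profiling": "yes",
    "automated_decision_significant_effects": "yes",
    "large_scale_sensitive_data": "yes",
    "systematic_monitoring": "yes",
    "hr_chatbot_candidate_data": "case_by_case",
    "no_personal_data": "no",
}

def dpia_required(scenario: str) -> str:
    """Return 'yes', 'no', or 'case_by_case' for a catalogued AI scenario."""
    # Unknown scenarios deliberately fall back to a case-by-case review
    # rather than silently waving the processing through.
    return DPIA_RULES.get(scenario, "case_by_case")

print(dpia_required("high_risk_profiling"))   # yes
print(dpia_required("uncatalogued_scenario")) # case_by_case
```

The conservative default matters: a tool your inventory does not recognise should trigger a review, not a green light.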
Automated Individual Decisions (Art. 21 revFADP)
Art. 21 revFADP governs automated individual decisions: decisions based solely on automated processing that produce legal effects or significantly affect the individual concerned.
Data subjects are entitled to:
- Information about the automated decision
- The opportunity to express their point of view
- A review of the decision by a natural person
These rights cannot be waived by general terms and conditions. A consent checkbox at onboarding does not grant blanket permission for unlimited automated decision-making.
Data Transfers to US AI Providers
One of the most practically challenging issues for Swiss SMEs is the use of AI services from American providers such as OpenAI, Microsoft, Google, or Anthropic.
The revFADP permits international data transfers only under specific conditions:
1. Countries with an adequate level of protection. The Federal Council maintains a list of countries recognised as offering an equivalent level of protection. The United States is not on this list as a whole: since September 2024, adequacy covers only transfers to US companies certified under the Swiss-U.S. Data Privacy Framework.
2. Appropriate safeguards. In the absence of recognition, transfers may rely on:
- Standard contractual clauses approved by the FDPIC
- Binding corporate rules (BCR)
- Explicit consent from the data subject
3. Narrow exceptions. Contract performance or vital interests can justify a transfer, but these exceptions are interpreted strictly and cannot be used as a routine workaround.
Practical implications for SMEs:
Before deploying a US-based AI tool, verify:
- Does the provider offer a data processing agreement (DPA) compliant with the revFADP?
- Can data be processed in Europe (e.g. Azure EU Data Boundary, OpenAI Enterprise)?
- Does the provider use your input data to train its models, and can you contractually prohibit this?
Microsoft Azure OpenAI Service and OpenAI Enterprise offer contract terms with EU data residency commitments. Google Vertex AI allows selection of the data storage region. Verify these options systematically before routing sensitive business data through any AI tool.
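The three vendor questions above lend themselves to a structured due-diligence record, so the answers are documented rather than assumed. The sketch below is illustrative: the `VendorCheck` class, its field names, and the all-or-nothing gating rule are our own, to be adapted to your internal audit process.

```python
from dataclasses import dataclass

# Illustrative sketch: record the three due-diligence answers per AI vendor.
# Class and field names are our own, not from any standard or statute.

@dataclass
class VendorCheck:
    name: str
    has_revfadp_dpa: bool               # signed a revFADP-compliant DPA?
    eu_data_residency: bool             # processing confined to Europe?
    training_on_inputs_excluded: bool   # training on your data contractually barred?

    def cleared_for_sensitive_data(self) -> bool:
        """All three checks must pass before routing sensitive business data."""
        return (self.has_revfadp_dpa
                and self.eu_data_residency
                and self.training_on_inputs_excluded)

# A vendor with a DPA and EU residency but no training carve-out fails the gate.
check = VendorCheck("ExampleAI", True, True, False)
print(check.cleared_for_sensitive_data())  # False
```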
The EU AI Act: What It Means for Swiss Companies
Although Switzerland is not an EU member state, the EU AI Act (Regulation EU 2024/1689) has direct implications for Swiss businesses that:
- Sell products or services to customers in the EU
- Use AI systems whose outputs are used within the EU
Risk Classification Under the EU AI Act
| Risk Category | Examples | Requirements |
|---|---|---|
| Unacceptable risk (banned) | Social scoring, real-time facial recognition in public spaces | Complete prohibition |
| High risk | AI recruitment, credit scoring, biometrics, medical devices | Pre-market conformity assessment, transparency, human oversight |
| Limited risk | Chatbots, deepfakes | User disclosure required |
| Minimal risk | Spam filters, recommendation systems | No specific obligations |
Swiss SMEs exporting to the EU that use high-risk AI systems must comply with the AI Act before marketing their products in the Union. Penalties for the most serious violations can reach EUR 35 million or 7% of global annual turnover.
Switzerland vs. EU: A Comparative Overview
| Criterion | Switzerland (revFADP) | EU (AI Act + GDPR) |
|---|---|---|
| AI-specific legislation | No | Yes (AI Act) |
| Regulatory approach | Sectoral, risk-based | Horizontal, category-based |
| Penalties | Up to CHF 250,000 (revFADP) | Up to EUR 35M or 7% turnover (AI Act) |
| DPIA | Mandatory if high risk | Mandatory if high risk (GDPR) |
| Automated decisions | Art. 21 revFADP | Art. 22 GDPR |
| High-risk profiling | Explicit consent required | Consent or other legal basis |
| Transfers to USA | Appropriate safeguards required | Appropriate safeguards required |
The key difference lies in penalty severity and regulatory reach: the EU can hold companies worldwide accountable for violations affecting the EU market. Switzerland's enforcement scope is narrower, but the substantive requirements for transparency and proportionality are materially equivalent.
Practical SME Checklist for AI Compliance
Use this checklist to audit the AI tools deployed in your business against revFADP requirements:
Inventory and Governance
- Have you catalogued all AI tools in use across your organisation?
- For each tool, do you know which personal data it processes?
- Have you signed a data processing agreement with every AI vendor?
Transparency
- Does your privacy policy mention the use of AI and the associated processing activities?
- Are data subjects informed when automated decisions are made about them?
- Do you have a documented procedure for human review of automated decisions?
Profiling and DPIA
- Do any of your AI tools perform profiling? If so, is it high-risk profiling?
- Have you conducted a DPIA for each high-risk processing activity?
- If the DPIA identified a high residual risk, have you consulted the FDPIC?
International Data Transfers
- Do your AI vendors process data outside Switzerland or Europe?
- Do you have appropriate safeguards in place for those transfers?
- Have you contractually confirmed that your data will not be used for model training?
Data Subject Rights
- Can your employees and customers exercise their rights (access, rectification, objection) even when their data is processed by AI?
- Does your record of processing activities (RoPA) include all AI-driven processing operations?
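A lightweight way to keep this audit actionable is to track the checklist answers as data and surface the open items. The sketch below is purely illustrative: the item labels paraphrase the checklist above, and the answers shown are made up.

```python
# Illustrative sketch: track compliance checklist answers and report gaps.
# Item labels paraphrase the checklist above; the answers are made up.

checklist = {
    "All AI tools catalogued": True,
    "DPA signed with every AI vendor": False,
    "Privacy policy mentions AI processing": True,
    "DPIA done for each high-risk activity": False,
    "Safeguards in place for non-European transfers": True,
    "RoPA covers AI-driven processing": True,
}

open_items = [item for item, done in checklist.items() if not done]
for item in open_items:
    print("TODO:", item)
```

Rerunning the report after each remediation step gives a simple, dated trail of how the gaps were closed.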
What Comes Next in Switzerland
The AI regulatory landscape is evolving quickly. Key developments to watch:
FDPIC AI guidelines: The Commissioner has signalled that specific guidance on AI use in the business context is forthcoming, expected by the end of 2026.
Council of Europe AI Convention (CETS 225): Switzerland is evaluating accession to the first binding international treaty on AI regulation, adopted in May 2024. Ratification would impose obligations relating to human rights, democracy, and the rule of law in the context of AI.
Federal AI Strategy update: The original 2019 federal AI strategy is under revision to account for the rapid advances in generative AI and large language models.
The recommendation for SMEs is unambiguous: do not wait for an AI-specific law before acting. The revFADP applies now, and supervisory authorities are actively exercising their oversight powers. Companies that act proactively protect themselves from sanctions and build the customer trust that is increasingly a competitive differentiator.
Are you using AI tools in your business and unsure whether your website meets Swiss data protection requirements? Start with a free analysis from PrivaScan: our tool detects active trackers and cookies on your site in seconds and gives you an immediate picture of your compliance posture.