AI Ethics Statement
Our commitment
At Workspace, we build software that businesses and their customers rely on every day. As artificial intelligence becomes part of more of the products we design and build, we have made a clear choice about how we work with it: thoughtfully, transparently, and with people in mind.
This statement describes how we approach AI in our internal work and in the solutions we deliver to clients. It is not a marketing claim, but a working document that we hold ourselves to.
Our principles
Our approach follows the European Commission's Ethics Guidelines for Trustworthy AI and the obligations introduced by the EU AI Act. Seven principles guide everything we build.
1. Human oversight
People stay in charge. We design AI features so that a person can review, correct, or stop the system when it matters. Automation is a tool, not a replacement for human judgement.
2. Technical robustness and safety
We test the systems we build, including AI components, against realistic scenarios and edge cases. When AI is part of a critical workflow, we plan for failure modes, fallback behaviour, and clear error states.
3. Privacy and data governance
We handle data carefully. Client data is never used to train third-party models without explicit written consent. We select AI vendors whose data processing terms meet the requirements of the GDPR and align with our clients' contractual obligations.
4. Transparency
When users interact with an AI feature, they should be able to tell. We label AI-generated content where that helps, document where a model is used inside a product, and give clients clear information about which capabilities are powered by AI and which are not.
5. Fairness and non-discrimination
We review AI behaviour for bias during development, especially in features that influence access, recommendations, or decisions about people. When a model produces results that look unfair, we treat it as a defect and fix it.
6. Social and environmental responsibility
Not every problem needs an AI solution. When a simpler approach would serve our clients and their users better, we say so. We also factor in the energy cost of running large models when we choose an architecture.
7. Accountability
Someone is always responsible for what we ship. Our teams document the AI systems they integrate, record meaningful changes, and respond when issues are reported. Accountability does not end at deployment.
How this looks in practice
Principles only matter when they show up in the work. These are some of the concrete steps we take:
- Engineers and designers using AI assistants follow internal guidance on what may and may not be shared with these tools.
- Client source code, customer data, and confidential business information are kept out of AI tools that retain inputs for training.
- Every AI feature we deliver to a client is reviewed by a person before it ships, and we hand over the documentation a client needs to maintain it.
- We track new requirements coming from the EU AI Act and adjust our internal practices as obligations come into force.
- Team members receive regular training on responsible AI use, including how to recognise hallucinations, bias, and security risks.
Continuous improvement
AI is moving quickly. The way we work today will not be the way we work in twelve months, and this statement is written so that it evolves with our practice. We review it at least once a year, and whenever a significant change in regulation, technology, or our internal process calls for it.
Get in touch
If you have a question, a concern, or feedback about how we use AI, write to us at hello@workspace.hr. We take these messages seriously and we respond.
For more on how we handle data, see our Privacy Policy. For company details, see our Legal Info.