Over the years, I’ve learned that building data and AI solutions isn’t just a technical challenge. It’s also an ethical one. We’ve all seen what happens when ethics are ignored in data and AI. Poorly governed algorithms have harmed citizens on a massive scale. Sensitive data has been leaked in contexts where trust should have been absolute, such as healthcare. Personal information has been exploited for manipulation in the digital sphere. The damage is never abstract; it is deeply personal, affecting livelihoods, well-being, and trust in institutions.
This is why, for me, ethics is not a side note. It is the foundation of how I design and build technology. Every data platform I work on is guided by a simple question: Are we doing the right thing for the client, their users, and society?
Why data and AI solutions need ethics
Data platforms are powerful. They allow organizations to integrate, process, and analyze large amounts of information. With that power comes responsibility. Especially with the rise of Generative AI, the stakes are higher than ever:
- We suddenly have access to more data than ever before.
- AI tools can generate insights at an incredible pace.
- The same tools can also be misused or produce misleading results.
Faster, cheaper, easier is not always better: When shortcuts are taken, the consequences can be severe. Sensitive personal information might end up in the wrong hands. Automated decisions can reinforce bias or exclude vulnerable groups. And AI-generated insights may look convincing but be completely wrong, leading to flawed strategies, wasted investments, or reputational damage that takes years to repair.
For many of our clients, these are not theoretical risks. They operate in often heavily regulated environments where compliance and trust are non-negotiable. A single misstep in how personal data is processed can result in fines, audits, or a loss of public trust. That is why I believe ethics is not a luxury or a marketing claim. It is the foundation of sustainable business. Without it, the very technology meant to empower organizations can end up harming the people it is supposed to serve.
Building safeguards into the platform
My first instinct as a technologist is to see how risks can be managed by design. A data platform should not just be a place where data flows; it should be a system that actively protects the people behind that data. That means thinking carefully at every stage – extraction, storage, and processing – about how to embed safeguards into the technology itself.
Some of the measures I always look at are:
- Minimizing exposure of sensitive data. The less personal or sensitive information moves through a system, the lower the risk. Wherever possible, this means leaving data out altogether; where that is not an option, it means reducing it to only what is strictly necessary.
- Anonymization or pseudonymization. By stripping away or masking identifiers, data can still be useful for analysis without exposing individuals. It also makes it nearly impossible to trace results back to Personally Identifiable Information (PII) in the source systems. This is one of the most effective ways to balance business value with privacy and reduce the risk of re-identification (see the sketch after this list).
- Strict access controls and monitoring. Even the best-protected data can be compromised if too many people have access. Role-based access, logging, and continuous monitoring make sure that only those who need data can see it, and that any misuse can be detected and addressed quickly.
- Traceability and auditability of transformations. Every change to data should leave a footprint. This makes it possible to explain where numbers came from, to correct mistakes, and to pass audits without scrambling to rebuild history.
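To make the pseudonymization point concrete, here is a minimal sketch in Python. It uses a keyed hash (HMAC-SHA256) so identifiers are masked consistently: the same input always maps to the same pseudonym, which keeps joins and aggregations possible, while the secret key stays outside the analytical environment. The field names and the key handling are illustrative assumptions, not a description of any specific client platform.

```python
import hmac
import hashlib


def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace an identifier with a stable, non-reversible pseudonym.

    HMAC-SHA256 with a secret key: the same input always yields the
    same pseudonym (so joins across datasets still work), but without
    the key the original value cannot be recovered or brute-forced
    from a small identifier space.
    """
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()


# Illustrative example: mask PII fields before data enters the platform.
# In practice the key would live in a secrets manager, never next to the data.
SECRET_KEY = b"replace-with-a-managed-secret"

record = {"customer_id": "C-10492", "email": "jan@example.com", "order_total": 89.50}
masked = {
    "customer_id": pseudonymize(record["customer_id"], SECRET_KEY),
    "email": pseudonymize(record["email"], SECRET_KEY),
    "order_total": record["order_total"],  # non-identifying fields pass through
}
print(masked)
```

The design choice that matters here is where this runs: pseudonymization should happen at extraction time, before data lands in the platform, with the key managed separately, so analysts never see raw identifiers in the first place.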
These safeguards are not just technical “best practices.” They are the backbone of trust. They make compliance with frameworks like GDPR or ISO 27001 more straightforward, but more importantly, they reduce real risks: the risk of a damaging leak, the risk of bias creeping into analysis, the risk of leadership losing confidence in the numbers they see.
In the end, good technical design creates a platform that does not just process data but actively protects it. And that protection is what allows organizations to use data with confidence knowing that the technology itself has ethics built into its core.
When technology isn’t enough
But not everything can be solved with code. Even the most secure and well-designed platform can be undermined if the organization around it doesn’t act responsibly. That’s why organizational measures matter just as much. They set the rules of the game and make sure that the technology is used in the spirit it was intended.
This comes down to a few essentials:
- Clear policies that define how data should and should not be used. Without them, people are left to make their own judgment calls, and that is exactly how misuse, bias, or “creative” shortcuts slip in. Policies make the boundaries explicit.
- Processing agreements that spell out responsibilities between all parties. Data often flows across organizations, and when something goes wrong, accountability can disappear into the gaps. Agreements create clarity: who owns which data, who is allowed to do what, and who is accountable when it does.
- Documented processes that guide teams in their daily work. Good practice cannot depend on a few experts who “just know what to do.” Processes make sure that handling sensitive data is consistent, repeatable, and auditable.
These measures are not about bureaucracy for its own sake. They are about building confidence. They give leadership assurance that risks are managed, and they give teams the guidance they need to do the right thing under pressure.
In my experience, the most robust solutions come when technical safeguards and organizational measures reinforce each other. Encryption without access policies leaves cracks. Policies without monitoring are wishful thinking. It’s the combination that makes ethics more than an intention and makes it part of daily reality.
How i-spark turns principles into practice
This way of working fits naturally with i-spark’s DNA. We believe that the right decisions aren’t always the fastest ones. When challenges arise, we choose integrity over speed. We choose to act with care, even if that means pausing or adjusting the pace of a project. Slowing down or even halting a project is not a failure; it is our commitment to doing things right. A moment that tested our approach arose when a client asked us to proceed with a project involving sensitive personal data. In this domain, even small mistakes can have enormous consequences for people’s lives.
Instead of rushing ahead, we deliberately slowed down. Together with the client, we:
- Described the technical safeguards that would be applied.
- Documented how the processes would work in practice.
- Identified which policies and agreements were needed to close the gaps.
We even chose not to ingest personal identifiers at all, and still built a dataset that supported the business case. Every step was documented so it could be audited later.
The outcome was not just a compliant solution. The client could demonstrate governance, reduce risks, and gain confidence that the data was being handled responsibly. More importantly, it helped prevent the kind of harm that poorly governed data and algorithms can cause at scale: loss of trust, damaged reputations, and real consequences for ordinary people.
For me, that moment captured what doing business ethically really means. It is not about slowing things down for the sake of it. It is about protecting people, reducing risks, and making sure platforms are trustworthy and auditable from day one.
Looking ahead
When I look at the pace at which AI is being adopted, I sometimes worry. Everyone seems to be in a rush to move faster, cheaper, and bigger. And of course, there is truth in the idea that those who adapt quickly often survive better. Speed can matter. But in my view, survival alone is not the same as building something lasting. Evolution takes time, and what really endures are the systems, businesses, and relationships built on trust.
That is why I believe the real winners will not only be the ones who adapt fastest, but the ones who move with integrity.
That is the path I’ve chosen as a technologist, and it’s the path we’ve chosen at i-spark. We have shown we’re willing to slow down, even hit pause, if that’s what it takes to do the right thing. For me, this is the only way I can stand behind the solutions we build. Ethics is not about words but about the choices we make. It is the way we work, the way we earn trust, and the way we make sure data and AI solutions truly serve people.
That’s why I believe in building data and AI solutions on an ethical foundation.