Data Protection in Paradise — Part 10

Data Protection Impact Assessments — A Practical Guide for Sri Lankan Businesses

The DPIA is the single most useful exercise an organisation can perform to understand where its data risks actually are. This is the guide I wish existed when I first started working through the PDPA’s provisions.

Of all the obligations in the PDPA, the Data Protection Impact Assessment is the one most likely to be done badly. Or not done at all.

This is unfortunate, because the DPIA is also the single most useful exercise an organisation can perform to understand where its data protection risks actually are. Done well, a DPIA forces you to think systematically about what data you are processing, why, what could go wrong, and what you are doing about it. Done badly, it is a twenty-page document that nobody reads, filed in a compliance folder that nobody opens.

The difference between a useful DPIA and a useless one is not complexity. It is honesty. A good DPIA tells you uncomfortable truths about your data practices. A bad DPIA tells you what you want to hear. This guide is designed to help you produce the first kind.

When You Need a DPIA

Section 24(1) of the PDPA sets out three triggers that require a controller to conduct a DPIA before processing begins:

Trigger 1: Systematic evaluation or profiling. If you are systematically evaluating personal aspects of individuals — including profiling — and this evaluation produces legal effects or similarly significantly affects the data subject, you need a DPIA. This covers credit scoring, automated hiring decisions, insurance risk assessment, customer segmentation that determines pricing or access to services, and any system that makes or informs decisions about individuals based on automated analysis of their personal data.

Trigger 2: Systematic monitoring of public areas or networks. If you are systematically monitoring publicly accessible areas — CCTV systems, Wi-Fi tracking, footfall analytics — or monitoring electronic communications networks, you need a DPIA. This is particularly relevant for retail operators, property managers, transport operators, and telecommunications companies.

Trigger 3: Categories determined by the Authority. The Data Protection Authority has the power to publish lists of processing operations that require a DPIA. As of March 2026, no such list has been published. But when it comes, it is likely to follow GDPR precedent and include processing of special categories of data on a large scale, large-scale processing of children’s data, processing involving new technologies, and processing that involves cross-referencing or combining datasets.

The practical guidance is simple: err on the side of conducting a DPIA. If you are asking yourself whether you need one, you probably do. The cost of conducting an unnecessary DPIA is a few days of work. The cost of not conducting a necessary one is a compliance violation and, more importantly, a missed opportunity to identify risks before they materialise.

The Five-Phase Framework

The PDPA does not prescribe a specific methodology for conducting DPIAs. This is both a freedom and a challenge. The following five-phase framework is designed to be practical, thorough, and applicable to any Sri Lankan organisation regardless of size or sector.

Phase 1: Describe the Processing

Before you can assess the risks of a processing activity, you need to describe it precisely. This means documenting four things: the nature of the processing (what you actually do with the data), the scope (how much data, how many people, what geographic area, what time period), the context (the relationship between you and the data subjects, their reasonable expectations, the current state of technology), and the purpose (why you are processing this data, what you hope to achieve).

Be specific. “We process customer data for marketing purposes” is not a description. It is an evasion. A proper description looks like this: “We collect mobile phone numbers, purchase history, and location data from customers who use our loyalty card at 47 retail outlets across the Western Province. We analyse this data using automated segmentation to identify purchasing patterns and generate personalised promotional offers, which we deliver via SMS and WhatsApp to approximately 200,000 active loyalty card holders on a weekly basis.”

The description should also map the data flows. Where does the data come from? Where is it stored? Who has access to it? Does it leave the organisation? Is it shared with processors or third parties? Is it transferred across borders? Draw the diagram. Follow the data from collection to deletion. If you cannot map the flow, you do not understand the processing well enough to assess its risks.
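If it helps to make the mapping concrete, a data-flow record can be captured as a simple structure. This is a hypothetical sketch, not a PDPA-mandated schema — the field names and the loyalty-programme values are assumptions for illustration:

```python
# Hypothetical data-flow record for Phase 1. Field names are illustrative
# assumptions, not a prescribed format.
from dataclasses import dataclass


@dataclass
class DataFlow:
    data_items: list[str]   # what personal data moves
    source: str             # where it is collected
    storage: str            # where it rests
    access: list[str]       # who can read it
    recipients: list[str]   # processors / third parties it is shared with
    cross_border: bool      # does it leave Sri Lanka?
    retention: str          # when it is deleted


# One flow from the loyalty-programme example above
loyalty_sms = DataFlow(
    data_items=["mobile number", "purchase history", "location"],
    source="loyalty card swipe at retail outlet",
    storage="CRM database (local data centre)",
    access=["marketing team", "CRM administrators"],
    recipients=["SMS gateway provider"],
    cross_border=False,
    retention="24 months after last transaction",
)
```

One record per flow, from collection to deletion, is usually enough to surface gaps: an empty `retention` field or an unexamined `recipients` list is itself a finding.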

Phase 2: Assess Necessity and Proportionality

Once you have described what you are doing, you need to ask whether you should be doing it at all — or at least, whether you should be doing it this way.

The necessity test: Is this processing necessary to achieve the stated purpose? Could you achieve the same purpose with less data, or less intrusive processing? If you are collecting date of birth, home address, NIC number, and employer details for a supermarket loyalty programme, is all of that necessary? Or could a mobile number and a loyalty card number achieve the purpose just as effectively?

The proportionality test: Even if the processing is necessary, is it proportionate? Does the benefit to the organisation justify the impact on the data subjects? A CCTV system covering every aisle of a supermarket may be necessary for loss prevention, but is it proportionate to also use that footage for customer behaviour analytics? The loss prevention purpose might justify the cameras. The analytics purpose requires a separate justification.

The legal basis test: What is your legal basis for this processing under the PDPA? Is it consent? Legitimate interest? Contractual necessity? Legal obligation? Whatever basis you claim, can you actually demonstrate it? If you say consent, do you have records of consent that meet Schedule III standards? If you say legitimate interest, have you conducted a balancing test?

Document your reasoning. This is not a box-ticking exercise. The Authority will want to see that you genuinely considered whether the processing is necessary and proportionate, and that you have a defensible legal basis for proceeding.

Phase 3: Identify and Assess Risks

This is the heart of the DPIA. And the first question to ask is: risk to whom?

A DPIA is not a risk assessment for your organisation. It is a risk assessment for the data subjects. The risks you are looking for are risks to the rights and freedoms of the individuals whose data you are processing. This is a critical distinction that many organisations get wrong. A data breach is a risk to your organisation because of fines and reputational damage. But in a DPIA, the risk is what the breach means for the people — identity theft, financial loss, discrimination, embarrassment, physical harm.

There are eight categories of risk to consider:

Confidentiality risks. Unauthorised access to or disclosure of personal data. Data breaches, insider threats, inadequate access controls, insecure data sharing.

Integrity risks. Inaccurate or incomplete data leading to wrong decisions. Incorrect credit scores, wrong medical records, outdated address information used for identity verification.

Availability risks. Data subjects unable to access their own data or exercise their rights. System downtime preventing access requests, deleted data that should have been retained, lost records.

Discrimination risks. Processing that could lead to unfair treatment based on protected characteristics. Algorithmic bias in automated decisions, proxy discrimination through correlated variables, profiling that reinforces existing inequalities.

Financial risks. Processing that could lead to financial loss for data subjects. Fraud enabled by data exposure, unfair pricing based on profiling, denial of financial services based on inaccurate data.

Reputational risks. Processing that could damage the reputation or social standing of data subjects. Disclosure of sensitive information, association with controversial activities, exposure of private behaviour.

Psychological risks. Processing that could cause distress, anxiety, or emotional harm. Intrusive surveillance, manipulation through personalised targeting, exposure to unwanted content.

Physical risks. Processing that could lead to physical harm to data subjects. Location tracking enabling stalking, disclosure of addresses to hostile parties, medical data breaches affecting treatment.

For each identified risk, assess both the likelihood of the risk materialising and the severity of the impact if it does. Use a simple matrix — high, medium, low for each dimension. Be honest. If your customer database has weak access controls and contains NIC numbers and home addresses, the likelihood of unauthorised access is not “low” just because it hasn’t happened yet.
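The simple matrix above can be made mechanical. The sketch below is one way to combine the two dimensions; the PDPA prescribes no scoring scheme, so the numeric levels and thresholds here are assumptions, not a standard:

```python
# Minimal likelihood x severity risk matrix. The numeric levels and the
# thresholds for the overall rating are illustrative assumptions.

LEVELS = {"low": 1, "medium": 2, "high": 3}


def risk_rating(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into an overall rating."""
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:      # high/high or high/medium
        return "high"
    if score >= 3:      # medium/medium or high/low
        return "medium"
    return "low"


# Example: weak access controls on a database of NIC numbers and addresses --
# severe impact, realistically medium likelihood.
print(risk_rating("medium", "high"))  # -> high
```

The scheme matters less than the honesty of the inputs: no weighting formula rescues an assessment where every likelihood is rated “low” by default.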

The Sri Lankan context matters here. Some risks that might be theoretical in other jurisdictions are very real in Sri Lanka. Ethnic tensions mean that disclosure of ethnicity or religion data carries risks that go beyond embarrassment. The post-conflict environment in the north and east means that certain categories of personal data — political affiliations, movement patterns, association data — carry heightened sensitivity. Political opinion data in a highly polarised political environment can have real consequences for individuals. Your risk assessment must reflect the actual context in which your data subjects live, not a generic international template.

Phase 4: Identify Mitigations

For every risk you have identified, you need to identify measures to mitigate it. Mitigations fall into four categories:

Technical measures. Encryption at rest and in transit. Access controls and authentication. Pseudonymisation or anonymisation. Automated data retention and deletion. Intrusion detection. Audit logging. Secure development practices.

Organisational measures. Data protection policies and procedures. Staff training and awareness. Incident response plans. Data sharing agreements. Vendor management and processor oversight. Regular audits and reviews.

Data minimisation measures. Collecting less data. Retaining data for shorter periods. Restricting access to smaller groups. Aggregating or anonymising data where the full dataset is not needed for the purpose. Deleting data that is no longer necessary.

Design measures. Privacy by design and by default. Building consent management into systems from the start. Designing user interfaces that make privacy choices clear and accessible. Architecting systems so that data protection is structural, not bolted on.

For each mitigation, document three things: what the measure is, how it reduces the identified risk, and what the residual risk is after the measure is implemented. No mitigation eliminates risk entirely. The question is whether the residual risk is acceptable. If it is not, you need additional mitigations or you need to reconsider whether the processing should proceed at all.

Phase 5: Document, Review, and Maintain

A DPIA is a living document. It is not something you produce once and file. The processing it describes will change over time. New risks will emerge. New mitigations will become available. The regulatory environment will evolve.

Section 24(4) of the PDPA requires a fresh DPIA — or a review of the existing one — whenever there is a change in the risk represented by the processing. This means that if you change the data you collect, the purposes you process it for, the technology you use, the recipients you share it with, or the context in which you operate, you need to revisit the DPIA.

The amended Section 24(3) requires that the Data Protection Officer be involved in the DPIA process. This is not optional. The DPO’s involvement must be documented, and their advice must be recorded, even if the organisation ultimately decides not to follow it.

Under the amended Section 24(5), the Authority has the power to request your DPIA. This means it needs to be in a state where it can be produced on demand. It needs to be current, complete, honest, and comprehensible. A DPIA that was last updated two years ago, that describes processing that has fundamentally changed, and that contains risk assessments that are no longer accurate will not satisfy the Authority. It may, in fact, make things worse by demonstrating that you identified risks and then failed to manage them.

A Practical Example

Let us work through a concrete example. A licensed commercial bank in Sri Lanka is deploying an AI-powered credit scoring system. The system analyses applicant data — income, employment history, existing debts, transaction patterns, repayment history — and generates a credit score that is used to determine whether to approve or decline loan applications, and at what interest rate.

Processing Description

The system collects personal data from loan applications (NIC number, income declarations, employment details), combines it with existing bank data (transaction history, account behaviour, previous loan performance), and feeds it into a machine learning model that generates a credit score between 0 and 1000. Scores below 400 result in automatic decline. Scores between 400 and 600 are referred for manual review. Scores above 600 are automatically approved at risk-adjusted interest rates. The system processes approximately 5,000 applications per month across all branches.
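The decision bands in this description reduce to a few lines of routing logic. The thresholds below come from the example itself; everything else is an illustrative sketch:

```python
# Decision bands from the worked example: <400 decline, 400-600 manual
# review, >600 automatic approval. The function itself is illustrative.

def route_application(score: int) -> str:
    """Route a credit score (0-1000) to a decision path."""
    if score < 400:
        return "automatic decline"
    if score <= 600:
        return "manual review"
    return "automatic approval (risk-adjusted rate)"


print(route_application(350))  # -> automatic decline
print(route_application(500))  # -> manual review
print(route_application(720))  # -> automatic approval (risk-adjusted rate)
```

Seeing the logic laid bare makes the DPIA question sharper: every applicant below 400 receives a fully automated legal effect, which is precisely what Section 24(1) is concerned with.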

Triggers

This processing triggers the DPIA requirement under Section 24(1) on the first ground: it involves systematic evaluation and profiling of individuals that produces legal effects (loan approval or denial) and similarly significantly affects them (interest rate determination). There is no ambiguity here. A DPIA is required.

Necessity and Proportionality

Credit assessment is necessary for responsible lending. Automated scoring enables faster decisions and consistent criteria. The data used is directly relevant to creditworthiness. The processing is arguably necessary and proportionate. However, the necessity of fully automated decisions for the 400-600 band warrants scrutiny — the manual review component is a proportionality measure.

Key Risks

Discrimination risk (high severity, medium likelihood). The model may encode biases present in historical lending data. If past lending practices discriminated against certain ethnic groups, geographic areas, or gender, the model will learn and perpetuate these biases. This is particularly sensitive in Sri Lanka, where certain communities in the north and east faced systemic financial exclusion during and after the conflict. An AI model trained on this historical data will treat geographic origin as a risk signal when it is actually a legacy of discrimination.

This risk also engages Article 12 of the Constitution — the right to equality and non-discrimination. A credit scoring system that produces systematically different outcomes for different ethnic groups, even indirectly through proxy variables, creates a constitutional risk that goes beyond data protection.

Transparency risk (high severity, medium likelihood). The machine learning model is a black box. Applicants who are declined cannot be told, in meaningful terms, why. This conflicts with the PDPA’s requirements around automated decision-making and the right to meaningful information about the logic involved.

Accuracy risk (medium severity, medium likelihood). The model’s training data may not reflect current economic conditions. Sri Lanka’s economy has undergone dramatic shifts in recent years. A model trained on pre-crisis data may not accurately predict creditworthiness in the current environment.

Financial risk (high severity, medium likelihood). Incorrect automated decisions could deny creditworthy applicants access to finance or approve applicants at punitive interest rates based on flawed scoring.

Mitigations

For the discrimination risk, the bank implements bias testing across protected characteristics, using demographic parity and equalised odds metrics. The model is tested for proxy discrimination — checking whether variables like postal code or employment sector serve as proxies for ethnicity. SHAP (SHapley Additive exPlanations) values are used to identify which features drive individual decisions, enabling meaningful explanations to declined applicants and allowing auditors to identify discriminatory patterns.
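The bias metrics named above have standard definitions that are easy to compute. The sketch below shows the demographic parity difference and the per-group true-positive rate used in equalised-odds checks; the numbers are invented for illustration and are not the bank’s actual test suite:

```python
# Standard fairness-metric definitions, sketched for illustration. The
# example counts are invented, not real lending data.

def demographic_parity_diff(approved_a: int, total_a: int,
                            approved_b: int, total_b: int) -> float:
    """Difference in approval rates between two groups."""
    return approved_a / total_a - approved_b / total_b


def true_positive_rate(approved_creditworthy: int, total_creditworthy: int) -> float:
    """Share of genuinely creditworthy applicants the model approves.
    Equalised odds requires this (and the false-positive rate) to be
    similar across groups."""
    return approved_creditworthy / total_creditworthy


# Example: group A approved 700 of 1,000 applicants, group B 550 of 1,000
gap = demographic_parity_diff(700, 1000, 550, 1000)
print(round(gap, 2))  # -> 0.15
```

A 15-point approval gap is not automatically unlawful — the groups may differ in genuinely relevant ways — but it is exactly the kind of signal the DPIA must record, investigate, and explain.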

For the transparency risk, the bank builds an explanation engine that translates SHAP values into plain-language reasons for each decision. Declined applicants receive a letter stating the top three factors that contributed to their score, expressed in terms they can understand and, where possible, act upon.
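One way such an explanation engine can work is to map the largest negative SHAP contributions onto templated plain-language reasons. This is a hypothetical sketch — the feature names, templates, and values below are invented for illustration:

```python
# Hypothetical explanation step: map the n most negative SHAP contributions
# to plain-language reasons. Feature names and templates are invented.

REASONS = {
    "debt_to_income": "Your existing debt is high relative to your declared income.",
    "repayment_history": "Recent repayments on existing facilities were late or missed.",
    "employment_tenure": "Your current employment history is relatively short.",
    "account_age": "Your relationship with the bank is relatively recent.",
}


def top_decline_reasons(shap_values: dict[str, float], n: int = 3) -> list[str]:
    """Return plain-language reasons for the n features that most lowered the score."""
    negative = [(feat, val) for feat, val in shap_values.items() if val < 0]
    worst = sorted(negative, key=lambda fv: fv[1])[:n]  # most negative first
    return [REASONS.get(feat, feat) for feat, _ in worst]


reasons = top_decline_reasons({
    "debt_to_income": -0.30,
    "repayment_history": -0.12,
    "employment_tenure": -0.05,
    "account_age": 0.08,
})
```

The design choice worth noting: the templates express factors the applicant can understand and, where possible, act upon, which is what the right to meaningful information actually requires.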

For the accuracy risk, the bank implements quarterly model retraining and monthly performance monitoring, with automatic alerts when prediction accuracy drops below threshold levels.
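The monitoring check itself can be trivially simple. In this sketch the 0.75 accuracy threshold is an assumed figure, not one from the bank’s actual policy:

```python
# Monthly model-health check. The 0.75 threshold is an assumed figure
# for illustration.

ACCURACY_THRESHOLD = 0.75


def check_model_health(monthly_accuracy: list[float]) -> list[int]:
    """Return the (0-indexed) months in which accuracy fell below threshold."""
    return [i for i, acc in enumerate(monthly_accuracy)
            if acc < ACCURACY_THRESHOLD]


alerts = check_model_health([0.82, 0.79, 0.73, 0.76])
print(alerts)  # -> [2]
```

The point for the DPIA is not the code but the commitment it encodes: a defined threshold, a defined cadence, and an alert that someone is obliged to act on.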

For the financial risk, the manual review band (400-600) provides a human check on borderline cases. The bank also implements an appeals process where declined applicants can request a fully human review.

Residual Risk

After mitigations, the residual discrimination risk is reduced from high to medium. It cannot be eliminated entirely because the training data inherently reflects historical patterns. The residual transparency risk is reduced from high to low through the explanation engine. The residual accuracy and financial risks are reduced through monitoring and human review. The overall assessment is that the processing can proceed with the identified mitigations in place, subject to quarterly review and ongoing monitoring.

Common Mistakes

Having reviewed numerous DPIAs across jurisdictions and industries, I see the same mistakes appear again and again. Sri Lankan organisations will make them too, unless they are warned.

The rubber stamp. A DPIA that is conducted after the decision to proceed has already been made, designed to confirm and justify rather than to genuinely assess. If the conclusion was written before the analysis, it is not a DPIA. It is a rationalisation. The Authority will see through it.

The consultant special. A DPIA outsourced entirely to a consultant who has never seen the actual processing, based on generic templates, filled with boilerplate risk assessments and off-the-shelf mitigations. The result reads beautifully and says nothing. Consultants can help with methodology and quality assurance, but the substance — the description of the processing, the assessment of necessity, the identification of risks — must come from the people who actually understand the processing. That means your people.

The one-and-done. A DPIA conducted at the start of a project and never revisited. The processing changes, the technology evolves, the context shifts, new risks emerge — but the DPIA stays frozen in time. Section 24(4) explicitly requires review when the risk changes. A stale DPIA is worse than no DPIA, because it creates a false sense of security.

The technical-only assessment. A DPIA that focuses exclusively on information security risks — encryption, access controls, penetration testing — while ignoring the broader data protection risks. Information security is part of the picture, but a DPIA must also address necessity, proportionality, data subject rights, discrimination, fairness, and the broader impact on individuals. A perfectly secure system that processes data unnecessarily or disproportionately still fails the DPIA.

The missing context. A DPIA that could have been written for any country, any industry, any organisation. It ignores the specific Sri Lankan context — the regulatory overlap with CBSL and TRCSL, the ethnic sensitivities, the post-conflict dynamics, the economic conditions, the infrastructure limitations, the cultural expectations around privacy. Context is not decoration. It is the foundation of a meaningful risk assessment.

Getting Started

If you have never conducted a DPIA, the prospect can feel overwhelming. It does not need to be. Here is how to start:

Pick one processing activity. Not your most complex or sensitive processing. Pick something manageable — a customer database, an employee monitoring system, a marketing campaign. Something you understand well and can describe precisely. Use it as your pilot. Learn the methodology on something that won’t break anything if you get it wrong the first time.

Assemble the right team. A DPIA is not a one-person job. You need the people who understand the processing (business owners, system administrators, developers), the people who understand the risks (information security, legal, compliance), and the people who understand the data subjects (customer service, HR, marketing). And your DPO, who must be involved under Section 24(3).

Use this framework. Five phases. Describe, assess necessity, identify risks, identify mitigations, document and maintain. Work through them in order. Be honest. Write it down. Show it to someone who will challenge your assumptions.

Don’t wait for the Authority’s template. The Authority may eventually publish DPIA guidance or templates. They may not. Either way, you cannot wait. The obligation exists in the Act. The methodology is well-established internationally. The risk of doing a DPIA without an official template is zero. The risk of not doing one because you were waiting for an official template is significant.

A DPIA is not a compliance document. It is a thinking exercise. It forces you to confront what you are actually doing with people’s data, why, and what could go wrong. Organisations that take it seriously will understand their own data practices better than they ever have before. That understanding is valuable regardless of what the Authority requires.

Start now. Start with one. Get it right. Then do the rest.

Next in the series: The Rs. 10 Million Question — How the PDPA’s Penalty Regime Actually Works
