Data Protection in Paradise — Part 7

The Algorithm Will See You Now — AI, Automated Decisions, and Constitutional Rights Under the PDPA

Two words added. The entire accountability landscape for AI in Sri Lanka, transformed.

In September 2025, CBSL Governor Dr. Nandalal Weerasinghe addressed the Capital Market Authority’s annual conference. His speech covered the usual topics — monetary policy, economic stabilisation, financial sector modernisation. But buried in his remarks about fintech innovation was a sentence that should have made every AI vendor, every credit scoring company, and every automated decision-making platform in Sri Lanka sit up very straight.

He said, in effect, that the Central Bank expected financial institutions to be able to explain how their algorithms made decisions that affected customers. Not in vague terms. Not in marketing language. In terms that a regulator — and ultimately a customer — could understand.

This expectation is now backed by law. The 2025 amendment to the PDPA changed two words in Section 18, and those two words quietly created one of the most progressive automated decision-making accountability frameworks in South Asia.

What Section 18 Actually Says

Section 18 of the PDPA addresses automated individual decision-making. In its amended form, it establishes a right for data subjects not to be subject to a decision based solely on automated processing — including profiling — which produces legal effects concerning the data subject or similarly significantly affects the data subject.

The two words the amendment added were “similarly significantly.” The original Act referred only to decisions that produced “legal effects.” The amendment expanded this to decisions that produce “legal effects or similarly significantly affect” the data subject.

This expansion is enormous. A legal effect is relatively narrow — a decision that changes your legal rights, obligations, or status. Denying a loan application. Cancelling an insurance policy. Refusing a government licence. But “similarly significantly affects” captures a far broader range of automated decisions: credit scoring that determines your interest rate, insurance pricing that uses your postcode as a proxy, employment screening algorithms that filter CVs, content recommendation systems that shape what information you see.

The right under Section 18 is not absolute. There are exceptions for decisions that are necessary for entering into or performing a contract, authorised by law, or based on the data subject’s explicit consent. But even where these exceptions apply, the data subject retains the right to obtain human intervention, to express their point of view, and to contest the decision.

Section 17 provides the procedural framework that supports these rights. It requires controllers to provide meaningful information about the logic involved in automated decision-making, as well as the significance and envisaged consequences of such processing for the data subject. This is not a right to be told that an algorithm was used. It is a right to be told how the algorithm works and what it means for you.

The Constitutional Dimension

What makes Sri Lanka’s automated decision-making provisions particularly powerful is their interaction with the Constitution.

Chapter III of the Constitution of Sri Lanka enshrines fundamental rights. Article 12 guarantees equality before the law and equal protection of the law. Article 12(2) specifically prohibits discrimination on grounds including race, religion, language, caste, sex, political opinion, and place of birth. Article 14(1)(g) guarantees the freedom to engage in any lawful occupation, profession, trade, business, or enterprise.

Article 126 provides that the Supreme Court has sole and exclusive jurisdiction to hear and determine any question relating to the infringement or imminent infringement of fundamental rights by executive or administrative action. A fundamental rights application must be filed within one month of the alleged infringement.

Now consider what happens when an algorithm makes a decision that discriminates on grounds protected by Article 12. A credit scoring model that uses postcode as a variable, effectively discriminating against applicants from predominantly Tamil or Muslim areas. An insurance pricing algorithm that uses proxies for ethnicity or religion. An employment screening tool that filters out graduates from certain universities, which in Sri Lanka correlates with language, ethnicity, and socioeconomic background.

These are not hypothetical scenarios. They are well-documented patterns in algorithmic decision-making globally. And in Sri Lanka, they do not merely violate the PDPA. They potentially violate fundamental rights guaranteed by the Constitution.

The PDPA’s Section 18, combined with the Constitution’s fundamental rights protections, creates a framework where algorithmic discrimination is not just a data protection violation — it is a constitutional matter. This is a more powerful accountability mechanism than exists in most jurisdictions, including the EU.

Where Algorithms Meet Real Life

Let me walk through the sectors where automated decision-making is already happening in Sri Lanka, and where the PDPA’s provisions will have the greatest impact.

Banking and Credit

Credit scoring is the most obvious application. Every major bank in Sri Lanka uses some form of automated credit assessment. Some use relatively simple rule-based systems. Others use machine learning models that consider dozens or hundreds of variables to predict creditworthiness.

The post-conflict context makes this particularly sensitive. In the Northern and Eastern Provinces — in Kilinochchi, in Batticaloa — decades of conflict destroyed conventional credit histories. A credit scoring model trained on historical data will systematically disadvantage applicants from these areas, not because they are less creditworthy, but because the data is thinner, less complete, and shaped by decades of disruption. This is not a bug in the algorithm. It is a feature of the data — and under Section 18, it is now something that must be disclosed, explained, and subject to human review.

Insurance

Insurance underwriting has always been about risk assessment. Increasingly, that risk assessment is automated. Postcode-based pricing — where your address determines your premium — is common in motor and property insurance. But in Sri Lanka, postcodes are proxies for ethnicity, income, and conflict history. An algorithm that prices based on postcode is, whether intentionally or not, pricing based on who you are and where you come from.

Section 18 requires that data subjects have the right to contest these decisions and obtain human intervention. This means insurers will need to be able to explain, in meaningful terms, why a particular premium was calculated — and they will need a process for reviewing that calculation when challenged.

Employment

Automated CV screening is increasingly common in Sri Lanka’s larger employers. These systems scan applications for keywords, qualifications, experience patterns, and other signals. But in Sri Lanka, employment screening carries hidden signals that algorithms can detect and amplify. The format of a National Identity Card (NIC) number encodes information about the holder. The name of a school or university carries signals about language, location, and socioeconomic background. An address reveals neighbourhood, which in many parts of Sri Lanka correlates with ethnicity.

An employer using automated screening must now grapple with whether its algorithm is making decisions that “similarly significantly affect” applicants — and if so, whether those decisions can be explained, contested, and reviewed by a human.

Government Services

As Sri Lanka digitises government services, automated decision-making is entering public administration. Subsidy allocation, benefit eligibility, permit approvals — these processes increasingly involve algorithmic assessment. The stakes are high: a wrongly denied subsidy can mean the difference between eating and not eating for a low-income family.

Public authorities using automated decision-making face an even higher bar under the PDPA because their decisions are more likely to produce “legal effects” and because they are subject to the constitutional fundamental rights framework through Article 126. The combination of the PDPA and the Constitution creates a robust accountability mechanism for government algorithms.

Telecommunications

Telecommunications operators use automated systems for credit checks on postpaid connections, for identifying customers for retention offers, for detecting fraud, and for personalising services. Each of these applications involves automated processing that can “similarly significantly affect” a customer — from denying them a postpaid connection to flagging their account for fraud investigation.

The Explainability Problem

Section 18 creates the right to contest automated decisions and obtain human review. Section 17 creates the right to “meaningful information about the logic involved.” But what does “meaningful information about the logic” actually require?

Schedule V of the PDPA, item 1(m), elaborates on this requirement. Controllers must provide “reasonably meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing.” The phrase “reasonably meaningful” is doing a lot of work here.

For simple rule-based systems, explainability is straightforward. “Your loan application was declined because your income-to-debt ratio exceeds 40% and your credit score is below 600.” Clear. Meaningful. Actionable.
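For a rule-based system, the explanation can be generated by the rules themselves. A minimal sketch, using the illustrative thresholds from the example above (the function, field names, and figures are hypothetical, not drawn from any bank's actual rules):

```python
# Hypothetical rule-based credit decision where every declining rule
# states its own reason, ready to be rendered into the "meaningful
# information" notice Section 17 contemplates. Thresholds (40% debt
# ratio, 600 score) mirror the worked example in the text.

def assess_loan(income: float, debt: float, credit_score: int) -> tuple[bool, list[str]]:
    """Return (approved, reasons). An empty reasons list means approved."""
    reasons = []
    if income <= 0 or debt / income > 0.40:
        reasons.append("income-to-debt ratio exceeds 40%")
    if credit_score < 600:
        reasons.append("credit score is below 600")
    return (not reasons, reasons)

approved, reasons = assess_loan(income=100_000, debt=55_000, credit_score=580)
# approved is False; reasons lists both declining rules.
```

Because the decision logic and the explanation are the same object, there is no gap between what the system did and what the data subject is told.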

For machine learning models — particularly deep learning models, ensemble methods, and other complex architectures — explainability is genuinely difficult. A neural network that considers 200 variables and has millions of parameters does not make decisions in a way that maps neatly onto human reasoning. You cannot point to a single variable and say “this is why.”

This is where the choice of model architecture becomes a compliance consideration. Techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) can provide post-hoc explanations for individual predictions from complex models. But these explanations are approximations — they tell you which variables were most important for a particular decision, but they do not fully explain how those variables interacted to produce the outcome.

Organisations facing Section 18 obligations will need to make deliberate choices about model selection. A simpler, more interpretable model that provides genuine explainability may be preferable to a more accurate but opaque model — particularly for decisions that produce legal effects or similarly significantly affect data subjects. The tradeoff between accuracy and explainability is not just a technical question. Under the PDPA, it is a legal one.
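To make the idea behind SHAP concrete, here is a stdlib-only sketch that computes exact Shapley values for a tiny invented model: each feature's attribution is its average marginal contribution across all coalitions of the other features. A production system would use the shap library rather than this brute-force enumeration, which is exponential in the number of features; the model weights and input values below are purely illustrative.

```python
# Exact Shapley attribution from first principles: for each feature i,
# average the change in model output from adding i to every subset S of
# the remaining features, weighted by how often that subset precedes i
# in a random ordering.
from itertools import combinations
from math import factorial

def shapley(model, instance, baseline):
    n = len(instance)
    def v(subset):
        # Model output with features in `subset` taken from the instance,
        # the rest held at their baseline values.
        x = [instance[i] if i in subset else baseline[i] for i in range(n)]
        return model(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# Illustrative linear "score" model: for linear models, each attribution
# reduces to weight * (instance value - baseline value).
model = lambda x: 2 * x[0] - 3 * x[1] + 0.5 * x[2]
phis = shapley(model, instance=[5, 2, 40], baseline=[4, 1, 30])
# Efficiency property: attributions sum to f(instance) - f(baseline).
```

Even in this exact form, the output is a ranking of variable contributions for one decision, not a full account of how the variables interacted, which is precisely the approximation caveat noted above.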

The DPIA Connection

Section 24 of the PDPA requires Data Protection Impact Assessments for processing that is likely to result in a high risk to data subjects. Automated decision-making, particularly profiling, is one of the clearest triggers for a DPIA.

The DPIA is where algorithmic fairness moves from abstract principle to concrete practice. A properly conducted DPIA for an automated decision-making system should assess not only the technical risks (data quality, model accuracy, security) but also the fairness risks (bias, discrimination, disproportionate impact on vulnerable groups) and the rights risks (ability to exercise Section 18 rights, access to human review, effectiveness of explainability mechanisms).

Critically, the DPIA is not a one-time exercise. Automated decision-making systems are not static. They are retrained on new data. Their performance drifts over time. The populations they serve change. A DPIA that was adequate at deployment may be inadequate six months later if the model’s behaviour has changed. Continuous monitoring — not just of model performance but of model fairness — is an implicit requirement of the PDPA’s risk-based approach.

What Organisations Should Do

Inventory Your Automated Decisions

Start with a comprehensive inventory of every automated decision-making process in your organisation. Do not limit this to systems you think of as “AI.” Include rule-based systems, scoring models, automated workflows, and any process where a decision that affects an individual is made without human intervention. You will almost certainly discover automated decisions you did not know existed — buried in operational workflows, embedded in vendor platforms, or running in legacy systems that nobody has reviewed in years.

Assess Constitutional Risk

For each automated decision-making process, assess whether it could produce outcomes that discriminate on grounds protected by Article 12 of the Constitution. This requires looking beyond the algorithm’s design to its actual outcomes. A model that does not explicitly use ethnicity as a variable can still produce ethnically discriminatory outcomes through proxy variables. Test for this. Measure it. Document it.
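One way to test outcomes rather than design is an audit that groups decisions by the protected attribute (or the suspected proxy) and compares approval rates. A minimal sketch over synthetic data; the group labels and figures are invented, and the "four-fifths" screening threshold is a common US heuristic, not a PDPA standard:

```python
# Outcome audit: even if ethnicity is never an input variable, grouping
# decisions by a protected attribute (or its proxy, such as postcode)
# reveals proxy-driven disparities in the results.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs -> {group: approval rate}."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate over highest. Values well below 1.0 flag a
    disparity worth investigating; the US 'four-fifths rule' uses 0.8
    as a screening threshold."""
    return min(rates.values()) / max(rates.values())

# Synthetic decisions grouped by postcode region, standing in for the
# protected attribute the postcode proxies.
records = ([("Region A", True)] * 80 + [("Region A", False)] * 20 +
           [("Region B", True)] * 50 + [("Region B", False)] * 50)
rates = approval_rates(records)
ratio = disparate_impact_ratio(rates)  # 0.625, well below the 0.8 screen
```

A ratio this low does not prove unlawful discrimination on its own, but it is exactly the kind of measured, documented signal the assessment above calls for.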

Build Explainability Into the Design

Do not treat explainability as an afterthought. Build it into the system from the beginning. Choose model architectures that support meaningful explanation. Implement SHAP, LIME, or other interpretability techniques at the design stage, not after deployment. Define, in advance, what “reasonably meaningful information about the logic involved” looks like for each automated decision — and test whether your explanations actually make sense to the people who will receive them.

Test for Fairness

Conduct fairness testing across all protected characteristics. In the Sri Lankan context, this means testing for disparate impact based on ethnicity, religion, language, gender, geographic location, and socioeconomic background. Use established fairness metrics — demographic parity, equalised odds, calibration — and document the results. Where disparities exist, document the justification for them or the mitigation measures being implemented.
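Two of the metrics named above can be computed directly from audit records. A stdlib-only sketch over synthetic data: demographic parity compares selection rates across groups, while equalised odds compares error rates, shown here for the true-positive side (a full equalised-odds check also compares false positive rates).

```python
# Synthetic audit records: (group, actual_outcome, model_decision), all 0/1.
data = (
    [("A", 1, 1)] * 3 + [("A", 1, 0)] * 1 + [("A", 0, 0)] * 2 +
    [("B", 1, 1)] * 1 + [("B", 1, 0)] * 1 + [("B", 0, 0)] * 2
)

def selection_rate(data, group):
    """P(decision = 1) within a group; demographic parity compares these."""
    preds = [pred for g, _, pred in data if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(data, group):
    """P(decision = 1 | actual = 1) within a group; one half of the
    equalised-odds comparison (the other half uses false positives)."""
    preds = [pred for g, y, pred in data if g == group and y == 1]
    return sum(preds) / len(preds)

dp_gap = abs(selection_rate(data, "A") - selection_rate(data, "B"))
tpr_gap = abs(true_positive_rate(data, "A") - true_positive_rate(data, "B"))
# dp_gap = |0.50 - 0.25| = 0.25; tpr_gap = |0.75 - 0.50| = 0.25
```

Libraries such as Fairlearn implement these and related metrics at scale; the point of the sketch is that the numbers to be documented are simple, auditable group comparisons, not anything exotic.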

Design Human Review Processes

Section 18 requires the ability to obtain human intervention. This means you need a genuine, meaningful human review process — not a rubber stamp. The human reviewer must have the authority to override the algorithm, the information needed to make an independent assessment, and the time to actually review the case properly. An automated decision that can only be “reviewed” by a person who has no authority to change it is not meaningful human intervention.

Conduct DPIAs for Every Significant Automated Decision

Do not wait for the Authority to specify exactly which automated decision-making processes require DPIAs. If a system makes decisions that produce legal effects or similarly significantly affect data subjects, conduct a DPIA. Include fairness analysis, explainability assessment, bias testing, and an evaluation of the human review mechanism. Treat the DPIA as a living document that is updated as the system evolves.

Document Everything

The accountability principle under the PDPA means you must be able to demonstrate compliance. For automated decision-making, this means documenting the model design, the training data, the fairness testing, the explainability mechanisms, the human review processes, the DPIA, and the ongoing monitoring results. When a data subject exercises their Section 18 rights, or when the Authority asks questions, your documentation is your defence.

The Broader Implications

Sri Lanka’s approach to automated decision-making under the PDPA is quietly ambitious. The combination of Section 18’s expanded scope (through “similarly significantly affects”), Section 17’s explainability requirements, Section 24’s DPIA obligations, and the Constitution’s fundamental rights protections creates a multi-layered accountability framework for algorithmic decision-making.

This framework does not ban AI. It does not restrict automation. It does something more nuanced and, ultimately, more useful: it requires organisations that use algorithms to make decisions about people to be able to explain those decisions, to subject them to human review, and to ensure they do not discriminate on constitutionally protected grounds.

In a country where algorithms are increasingly being deployed in banking, insurance, employment, government services, and telecommunications — and where the historical context makes algorithmic bias particularly dangerous — this framework is not just progressive regulation. It is a necessary safeguard.

The algorithm will see you now. The question is whether you can see the algorithm.

Next in the series: The DPO Sri Lanka Doesn’t Have Yet

Need help with PDPA compliance?

We build tools and methodologies for Sri Lanka’s regulatory landscape.

Start a conversation