The paper investigates dark patterns in privacy (DPPs): interface‑design tactics that manipulate users into disclosing more personal data than they intend, to the benefit of the service provider.
Jarovsky’s paper delivers a clear, interdisciplinary framework for identifying and regulating dark patterns in privacy. By tying UI manipulation to cognitive bias theory and positioning the GDPR’s fairness principle as the legal fulcrum, the research equips regulators, designers, and legal practitioners with a concrete roadmap to detect, assess, and ultimately prevent privacy‑harmful design practices.
Guiding questions:
The study asks three questions: (i) how should DPPs be precisely defined, (ii) how can they be organized into a taxonomy grounded in cognitive‑bias theory, and (iii) are they compatible with current EU data‑protection law, and if not, which pathways for regulatory reform are available?
| Background point | Why it matters |
|---|---|
| Rise of manipulative UI – Users regularly encounter opaque settings, forced‑consent banners, and default‑heavy designs that funnel them toward data‑rich choices. | Demonstrates a systemic problem that undermines the GDPR’s consent requirements. |
| Legal gap – While the GDPR mandates “freely given, specific, informed” consent, it does not explicitly address how consent is obtained via UI. | Creates a loophole where controllers can obtain apparently lawful consent that is actually coerced. |
| Existing scholarship – Prior work on dark patterns (e.g., Brignull, Nouwens, CNIL) treats them mainly as marketing tricks; few connect them to privacy law. | Jarovsky’s work bridges HCI, behavioral economics, and EU law, offering a multidisciplinary lens. |
| Policy momentum – The EU Digital Services Act (DSA) and California CPRA have begun naming dark patterns, but their scope is limited. | Highlights the timeliness of a deeper legal analysis of DPPs. |
| Unique contribution – The paper (a) refines the definition to separate dark patterns from nudges, (b) maps each pattern to a specific cognitive bias, and (c) frames the GDPR’s fairness principle as the primary legal lever. | Provides a concrete analytical toolkit for regulators, designers, and scholars. |
“To be considered a dark pattern, the design must be manipulative and have as an objective goal to make the data subject worse off according to the observed criteria.”
| Category | Core Mechanism | Example(s) |
|---|---|---|
| Pressure | Coercive language or conditional access (“you must share X to use Y”). | “Require marketing consent to complete a purchase.” |
| Hinder | Deliberate friction—hidden settings, complex navigation, privacy‑invasive defaults. | “‘Accept all’ vs. a labyrinth of individual switches.” |
| Mislead | Ambiguous wording, double negatives, visual tricks (color, contrast). | “Green ‘deny’ button, red ‘accept’ button.” |
| Misrepresent | False claims of necessity or benefit. | “Claim data collection is legally required when it isn’t.” |
Each category is linked to a set of cognitive biases (e.g., default effect, framing, anchoring, social proof).
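To make the Mislead mechanism concrete, here is a minimal sketch (not taken from the paper) of a consent banner whose visual asymmetry steers users toward acceptance; the element names, wording, and styling values are illustrative assumptions.

```typescript
// Illustrative sketch: a consent banner showing the "Mislead" category
// (visual tricks via colour and contrast). All element names and styles
// are hypothetical, not drawn from the paper.

function renderConsentBanner(container: HTMLElement, misleading: boolean): void {
  const accept = document.createElement("button");
  const deny = document.createElement("button");
  accept.textContent = "Accept all";
  deny.textContent = "Continue without accepting";

  if (misleading) {
    // Dark pattern: the data-rich choice is large and brightly coloured,
    // while the privacy-protective choice is low-contrast text that reads
    // like a disabled link — exploiting framing and visual salience.
    accept.style.cssText = "background:#1a73e8;color:#fff;font-size:18px;padding:12px 24px;";
    deny.style.cssText = "background:none;color:#bbb;font-size:11px;border:none;";
  } else {
    // Neutral presentation: both options carry equal visual weight.
    const neutral = "background:#e0e0e0;color:#212121;font-size:14px;padding:10px 20px;";
    accept.style.cssText = neutral;
    deny.style.cssText = neutral;
  }

  accept.onclick = () => console.log("consent: accepted");
  deny.onclick = () => console.log("consent: denied");
  container.append(accept, deny);
}
```

In the neutral variant, neither option benefits from colour or size cues, which is closer to the "freely given" consent the GDPR requires.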
“DPPs breach the principle of fairness, for cumulatively: (a) not respecting reasonable expectations … (c) involving manipulation … (d) negatively affecting data subjects’ privacy.”
| Audience | Practical Applications |
|---|---|
| Privacy Regulators & Policymakers | Draft interpretative guidance that treats DPPs as violations of the fairness principle. Amend the DSA so that its dark‑pattern prohibition also applies to practices covered by the GDPR (currently exempted). Incorporate the taxonomy into supervisory checklists for DPIA reviews. |
| Designers & Product Teams | Conduct an internal audit using the four‑category taxonomy to spot DPPs in UI flows (a minimal audit sketch follows this table). Replace pressure and mislead patterns with transparent, opt‑in consent dialogs. Adopt “privacy‑by‑design” checklists that explicitly test for default‑effect exploitation. |
| Legal Counsel & Compliance Officers | Re‑evaluate consent records for evidence of pressure or misrepresent tactics. Advise clients that consent obtained via any of the four categories may be deemed invalid under GDPR and CPRA precedents. Prepare arguments for fairness‑principle defenses in enforcement actions. |
| Researchers & Academics | Extend the taxonomy to other jurisdictions (e.g., Brazil’s LGPD, India’s PDPB). Empirically test the impact of each pattern on user behavior using eye‑tracking or A/B experiments. Explore the intersection of DPPs with algorithmic profiling and AI‑driven personalization. |
| Consumer Advocacy Groups | Use the taxonomy to produce “dark‑pattern scorecards” for popular apps/websites. Educate users about cognitive biases that make DPPs effective, encouraging critical UI scrutiny. |
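As referenced in the designers’ row above, here is a minimal sketch of how a product team might encode the four‑category taxonomy into an internal audit checklist. The category names come from the paper; the data structure, fields, and example findings are illustrative assumptions.

```typescript
// Illustrative sketch only: the four categories come from the paper's taxonomy;
// the checklist fields and example findings are hypothetical.

type DppCategory = "Pressure" | "Hinder" | "Mislead" | "Misrepresent";

interface AuditFinding {
  category: DppCategory;
  screen: string;        // where in the UI flow the pattern appears
  description: string;   // what the manipulative element does
  cognitiveBias: string;  // bias the design exploits (e.g., default effect, framing)
}

// Example findings a reviewer might record while walking a consent flow.
const findings: AuditFinding[] = [
  {
    category: "Pressure",
    screen: "checkout",
    description: "Marketing consent is required to complete a purchase",
    cognitiveBias: "loss aversion",
  },
  {
    category: "Hinder",
    screen: "cookie settings",
    description: "'Accept all' is one click; granular opt-out needs five screens",
    cognitiveBias: "default effect",
  },
];

// Summarize findings per category so teams can prioritize remediation.
function summarize(items: AuditFinding[]): Record<DppCategory, number> {
  const counts: Record<DppCategory, number> = {
    Pressure: 0, Hinder: 0, Mislead: 0, Misrepresent: 0,
  };
  for (const f of items) counts[f.category] += 1;
  return counts;
}

console.log(summarize(findings)); // { Pressure: 1, Hinder: 1, Mislead: 0, Misrepresent: 0 }
```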
Future‑research directions highlighted by the author appear under “Researchers & Academics” above; the table below summarizes the paper’s methodology.
| Methodological Element | Description |
|---|---|
| Conceptual analysis | Synthesized literature from HCI, behavioral economics, and EU data‑protection law to craft a precise definition separating DPPs from nudges. |
| Taxonomy construction | Mapped a non‑exhaustive list of cognitive biases (anchoring, default effect, framing, etc.) to observable UI manipulations, producing four high‑level categories (Pressure, Hinder, Mislead, Misrepresent). |
| Legal doctrinal review | Examined GDPR Articles 6, 7, 25, Recitals 42‑45, the DSA, and the CPRA, interpreting how each regime treats consent and dark‑pattern‑like practices. |
| Case illustrations | Presented realistic user scenarios (Alice, Bob, Charlie, Danah) to demonstrate each pattern type and its privacy impact. |
| Normative argumentation | Leveraged the EU’s fairness principle and the PECL’s contract‑law concepts (mistake, fraud, pressure, hindrance) to argue for a legal re‑framing of DPPs. |
No empirical data (surveys, interviews, or experiments) were collected; the work is a theoretical‑legal synthesis supported by illustrative examples.
