Privacy Principles

W3C Draft TAG Finding

Robin Berjon (The New York Times)
Jeffrey Yasskin (Google)
GitHub: w3ctag/privacy-principles


Privacy is an essential part of the Web [ETHICAL-WEB]. This document provides definitions for privacy and related concepts that are applicable worldwide. It also provides a set of privacy principles that should guide the development of the Web as a trustworthy platform. Users of the Web would benefit from a stronger relationship between technology and policy, and this document is written to work with both.

Status of This Document

This is a preview

Do not attempt to implement this version of the specification. Do not reference this version as authoritative in any way. Instead, see the Editor's draft.

This document is a Draft Finding of the Technical Architecture Group (TAG). It was prepared by the Web Privacy Principles Task Force, which was convened by the TAG. Publication as a Draft Finding does not imply endorsement by the TAG or by the W3C Membership.

This draft does not yet reflect the consensus of the TAG or the task force and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as anything other than a work in progress.

It will continue to evolve and the task force will issue updates as often as needed. At the conclusion of the task force, the TAG intends to adopt this document as a Finding.

1. Introduction

Privacy is an essential value of the Web ([ETHICAL-WEB], [design-principles]). In everyday life, people typically find it easy to assess whether a given flow of information is a violation of privacy or not [NYT-PRIVACY]. However, in the digital space, users struggle to understand how their data may be moved between contexts and how that may affect them. This is particularly true if they may be affected at a much later time and in completely different situations. Some actors are using this confusion to extract and exploit personal data at scale.

The goal of this document is to define principles that may prove useful in developing technology and policy that relate to privacy and personal data.

Personal data is covered by legal frameworks and this document recognises that existing data protection laws take precedence for legal matters. However, because the Web is global, we benefit from having shared concepts to guide its evolution as a system built for its users [RFC8890]. A clear and well-defined view of privacy on the Web, informed by research, can hopefully help all the Web's participants in different legal regimes. Our shared understanding is that the law is a floor, not a ceiling.

2. Definitions

This section provides a number of building blocks to create a shared understanding of privacy. Some of the definitions below build on top of the work in Tracking Preference Expression (DNT) [tracking-dnt].

2.1 People & Data

A user (also person or data subject) is any natural person.

We define personal data as any information relating to a person such that:

Data is permanently de-identified when there exists a high level of confidence that no human subject of the data can be identified, directly or indirectly (e.g., via association with an identifier, user agent, or device), by that data alone or in combination with other retained or available information, including as being part of a group. Note that further considerations relating to groups are covered in the Collective Issues in Privacy section.
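The document does not prescribe a technical test for de-identification, but one commonly used necessary (not sufficient) condition is k-anonymity: no combination of indirectly identifying attributes may be shared by fewer than k people, since a rare combination can single someone out, including as part of a small group. The sketch below is illustrative; the field names and the choice of k are assumptions, not part of this document.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k=5):
    """Check that every combination of quasi-identifier values in
    `records` is shared by at least k individuals. A unique or rare
    combination can re-identify a person, or expose them as a member
    of a small group, even if direct identifiers were removed."""
    combos = Counter(
        tuple(record[field] for field in quasi_identifiers)
        for record in records
    )
    return all(count >= k for count in combos.values())

records = [
    {"age_band": "30-39", "region": "NW", "visits": 4},
    {"age_band": "30-39", "region": "NW", "visits": 7},
    {"age_band": "40-49", "region": "SE", "visits": 1},
]
# Only one person falls in the ("40-49", "SE") bucket, so this
# dataset is not 2-anonymous over those fields.
print(is_k_anonymous(records, ["age_band", "region"], k=2))  # False
```

Passing such a check over the retained fields still says nothing about other available information the data could be combined with, which is why the definition above requires confidence across all retained or available information.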

Data is pseudonymous when:

Handling data pseudonymously requires a minimum degree of governance: technical and procedural means must be in place to guarantee that pseudonymity is maintained. Note that pseudonymity, on its own, is not sufficient to render data processing appropriate.
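A common technical means of pseudonymisation is replacing a direct identifier with a keyed hash, so that re-identification requires access to the key. This is a minimal sketch under that assumption; the key-handling policy in the comments is exactly the procedural governance the text refers to, not something the code itself can provide.

```python
import hashlib
import hmac

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).
    Re-linking the token to the person requires `secret_key`, so the
    key must be held separately under access controls and rotation
    procedures -- the procedural half of maintaining pseudonymity."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

key = b"held-separately-under-access-controls"
token = pseudonymise("user@example.com", key)
# The same input always yields the same token under one key, so
# records about the person can still be joined -- which is why
# pseudonymity alone does not make processing appropriate.
assert token == pseudonymise("user@example.com", key)
```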

A vulnerable person is a person who, at least in the context of the processing being discussed, is unable to exercise sufficient self-determination for any consent they may provide to be meaningful. This includes, for example, children, employees with respect to their employers, people in some situations of intellectual or psychological impairment, or refugees.

2.2 The Parties

A party is an entity that a person can reasonably understand as a single "thing" they're interacting with. Uses of this document in a particular domain are expected to describe how the core concepts of that domain combine into a user-comprehensible party, and those refined definitions are likely to differ between domains.

The first party is a party with which the user intends to interact. Merely hovering over, muting, pausing, or closing a given piece of content does not constitute a user's intent to interact with another party, nor does the simple fact of loading a party embedded in the one with which the user intends to interact. In cases of clear and conspicuous joint branding, there can be multiple first parties. The first party is necessarily a data controller of the data processing that takes place as a consequence of a user interacting with it.

A third party is any party other than the user, the first party, or a service provider acting on behalf of either the user or the first party.

A service provider or data processor is considered to be the same party as the entity contracting it to perform the relevant processing if it:

A data controller is a party that determines the means and purposes of data processing. Any party that is not a service provider is a data controller.

The Vegas Rule is a simple implementation of privacy in which "what happens with the first party stays with the first party." Put differently, it describes a situation in which the first party is the only data controller. Note that, while enforcing the Vegas Rule provides a rule of thumb describing a necessary baseline for appropriate data processing, it is not always sufficient to guarantee appropriate processing since the first party can process data inappropriately.

2.3 Acting on Data

A party processes data if it carries out operations on personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, sharing, dissemination or otherwise making available, selling, alignment or combination, restriction, erasure or destruction.

A party shares data if it provides it to any other party. Note that, under this definition, a party that provides data to its own service providers is not sharing it.

A party sells data when it shares it in exchange for consideration, monetary or otherwise.

2.4 Contexts and Privacy

The purpose of a given processing of data is an anticipated, intended, or planned outcome of this processing which is achieved or aimed for within a given context. A purpose, when described, should be specific enough to be actionable by someone familiar with the relevant context (i.e., they could independently determine means that reasonably correspond to an implementation of the purpose).

The means are the general method of data processing through which a given purpose is implemented, in a given context, considered at a relatively abstract level and not necessarily all the way down to implementation details. Example: the user will have their preferences restored (purpose) by looking up their identifier in a preferences store (means).

A context is a physical or digital environment that a person interacts with for a purpose of their own (that they typically share with other people who interact with the same environment).

A context can be further described through:

A context carries context-relative informational norms that determine whether a given data processing is appropriate (if the norms are adhered to) or inappropriate (when the norms are violated). A norm violation can be for instance the exfiltration of personal data from a context or the lack of respect for transmission principles. When norms are respected in a given context, we can say that contextual integrity is maintained; otherwise that it is violated ([PRIVACY-IN-CONTEXT], [PRIVACY-AS-CI]).

We define privacy as a right to appropriate data processing. A privacy violation is, correspondingly, inappropriate data processing [PRIVACY-IN-CONTEXT].

Note that a first party can comprise multiple contexts if it is large enough that people would interact with it for more than one purpose. Sharing personal data across contexts is, in the overwhelming majority of cases, inappropriate.

Your cute little pup uses Poodle Naps to find comfortable places to snooze, and Poodle Fetch to locate the best sticks. Napping and fetching are different contexts with different norms, and sharing data between these contexts is a privacy violation despite the shared ownership of Naps and Fetch by the Poodle conglomerate.

Colloquially, tracking is understood to be any kind of inappropriate data collection.

Additionally, privacy labour is the practice of having a person carry out the work of ensuring that data processing of which they are the subject is appropriate, instead of placing the responsibility for that work on the parties processing the data, where it properly belongs.

2.5 User Agents

The user agent acts as an intermediary between a user and the web. The user agent is not a context because it is expected to align fully with its user and operate exclusively in that person's interest. It is not the first party. The user agent serves the user as a trustworthy agent: it always puts the user's interest first. On some occasions, this can mean protecting users from themselves by preventing them from carrying out a dangerous decision, or by slowing them down in making it. For example, the user agent will make it difficult for the user to connect to a site if it can't verify that the site is authentic. It will check that the user really intends to expose a sensitive device to a page. It will prevent the user from consenting to the permanent monitoring of their behaviour. Its user agent duties include [TAKING-TRUST-SERIOUSLY]:

Duty of Protection
Protection requires user agents to actively protect a user's data, beyond simple security measures. It is not enough to encrypt data at rest and in transit; the user agent must also limit retention, help ensure that only strictly necessary data is collected, and require guarantees from those with whom data is shared.
Duty of Discretion
Discretion requires the user agent to make best efforts to enforce context-relative informational norms by placing limits on the flow and processing of personal data. Discretion is not confidentiality or secrecy: trust can be preserved even when the user agent shares some personal data, so long as it is done in an appropriately discreet manner.
Duty of Honesty
Honesty requires that the user agent try to give the user information that is relevant to them and that will increase the user's autonomy, as long as they can understand it and there's an appropriate time. This is almost never when the user is trying to do something else such as read a page or activate a feature. The duty of honesty goes well beyond that of transparency that is often included in older privacy regimes. Unlike transparency, honesty can't hide relevant information in complex out-of-band legal notices and it can't rely on very short summaries provided in a consent dialog. If the user has provided consent to processing of their personal data, the user agent should inform the user of ongoing processing, with a level of obviousness that is proportional to the reasonably foreseeable impact of the processing.
Duty of Loyalty
Because the user agent is a trustworthy agent, it is held to be loyal to the user in all situations, including in preference to the user agent's implementer. When a user agent carries out processing that is not directly in the user's interest but instead benefits another entity (such as the user agent's implementer) that behaviour is known as self-dealing. Behaviour can be self-dealing even if it is done at the same time as processing that is in the user's interest. Self-dealing is always inappropriate. Loyalty is the avoidance of self-dealing.

These duties ensure the user agent will care for the user. It is important to note that there is a subtle difference between care and data paternalism. Data paternalism claims to help in part by removing agency ("don't worry about it, so long as your data is with us it's safe, you don't need to know what we do with it, it's all good because we're good people") whereas care aims to support people by enhancing their agency and sovereignty.

In academic research, this relationship with a trustworthy agent is often described as "fiduciary" [FIDUCIARY-UA].

2.6 Identity on the Web

A person's identity is the set of characteristics that define them. Their identity in a context is the set of characteristics they present to that context. People frequently present different identities to different contexts, and also frequently share an identity among several contexts.

Cross-context recognition is the act of recognising that an identity in one context is the same person as an identity in another context. Cross-context recognition can at times be appropriate but anyone who does it needs to be careful not to apply the norms of one context in ways that violate the norms around use of information acquired in a different context. (For example, if you meet your therapist at a cocktail party, you expect them to have rather different discussion topics with you than they usually would, and possibly even to pretend they do not know you.) This is particularly true for vulnerable people as recognising them in different contexts may force their vulnerability into the open.

In computer systems and on the Web, an identity seen by a particular website is typically assigned an identifier of some type, which makes it easier for an automated system to store data about that user.

Best Practice 1: User agents should support their users' autonomy by helping them present their intended identity to each context that they visit.

To do this, user agents have to make some assumptions about the borders between contexts. By default, user agents define a machine-enforceable context or partition as:

Even though this is the default, user agents are free to restrict this context as their users need. For example, some user agents may help their users present different identities to subdivisions of a single site.
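The shape of a machine-enforceable context can be sketched as a compound storage key. The sketch below is hypothetical: it assumes a partition keyed by (browser profile, top-level site), and it approximates the "site" by taking the last two host labels, whereas real browsers compute the registrable domain from the Public Suffix List (the two-label shortcut is wrong for suffixes like co.uk).

```python
from urllib.parse import urlsplit

def partition_key(profile: str, top_level_url: str) -> tuple:
    """Hypothetical partition key: (browser profile, registrable
    domain of the top-level page). Storage looked up under this key
    is invisible from any other partition, which is what keeps an
    embedded third party from recognising the user across sites."""
    host = urlsplit(top_level_url).hostname or ""
    site = ".".join(host.split(".")[-2:])  # simplification; see note above
    return (profile, site)

# An embedded widget gets separate storage under each top-level site,
# so it cannot recognise the user across these two visits...
assert partition_key("work", "https://news.example/article") != \
       partition_key("work", "https://shop.example/cart")
# ...but pages within one site (and one profile) share a partition.
assert partition_key("work", "https://news.example/a") == \
       partition_key("work", "https://news.example/b")
```

A user agent that lets users present different identities to subdivisions of a site would, in effect, add further components to this key.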

Issue 1: Figure out the default privacy boundary for the web

There is disagreement about whether user agents may also widen their machine-enforceable contexts. For example, some user agents might want to help their users present a single identity to multiple sites that the user understands represent a single party, or to a site across multiple installations.

User agents should prevent their user from being recognized across machine-enforceable contexts unless the user intends to be recognized. This is a "should" rather than a "must" because there are many cases where the user agent isn't powerful enough to prevent recognition. For example if two or more services that a user needs to use insist that the user share a difficult-to-forge piece of their identity in order to use the services, it's the services behaving inappropriately rather than the user agent.

If a site includes multiple contexts whose norms indicate that it's inappropriate to share data between the contexts, the fact that those distinct contexts fall inside a single machine-enforceable context doesn't make sharing data or recognizing identities any less inappropriate.

2.7 User Control and Autonomy

A person's autonomy is their ability to make decisions of their own volition, without undue influence from other parties. People have limited intellectual resources and time with which to weigh decisions, and by necessity rely on shortcuts when making decisions. This makes their privacy preferences malleable [PRIVACY-BEHAVIOR] and susceptible to manipulation [DIGITAL-MARKET-MANIPULATION]. A person's autonomy is enhanced by a system or device when that system offers a shortcut that aligns more with what that person would have decided given arbitrary amounts of time and relatively unfettered intellectual ability; and autonomy is decreased when a similar shortcut goes against decisions made under ideal conditions.

Affordances and interactions that decrease autonomy are known as dark patterns. A dark pattern does not have to be intentional; the deceptive effect is sufficient to define one [DARK-PATTERNS], [DARK-PATTERN-DARK].

Because we are all subject to motivated reasoning, the design of defaults and affordances that may impact user autonomy should be the subject of independent scrutiny. Implementers are enjoined to be particularly cautious to avoid slipping into data paternalism.

Given the sheer volume of potential data-related decisions in today's data economy, complete informational self-determination is impossible. This fact, however, should not be confused with the contention that privacy is dead. Careful design of our technological infrastructure can ensure that users' autonomy as pertaining to their own data is enhanced through appropriate defaults and choice architectures.

In the 1970s, the Fair Information Practices (FIPs) were elaborated in support of individual autonomy in the face of growing concerns about databases. The FIPs assume that there is sufficiently little data processing taking place that any person will be able to carry out sufficient diligence to enable autonomy in their decision-making. Since they entirely offload the privacy labour to users and assume perfect, unfettered autonomy, the FIPs do not forbid specific types of data processing but only place them under different procedural requirements. Such an approach may have been appropriate for the limited data processing of the 1970s, but it no longer is.

One notable issue with procedural approaches to privacy is that they tend to have the same requirements in situations where the user finds themselves in a significant asymmetry of power with a party — for instance the user of an essential service provided by a monopolistic platform — and those where user and parties are very much on equal footing, or even where the user may have greater power, as is the case with small businesses operating in a competitive environment. It further does not consider cases in which one party may coerce other parties into facilitating its inappropriate practices, as is often the case with dominant players in advertising [CONSENT-LACKEYS] or in content aggregation [CAT].

References to the FIPs survive to this day. They are often invoked as transparency and choice, which, in today's digital environment, is frequently a strong indication that inappropriate processing is being described.

Figure 1: Agnes from WandaVision winking "transparency and choice", a method of privacy regulation which promises honesty and autonomy but delivers neither [CONFIDING].

2.9 Collective Issues in Privacy

When designing Web technology, we naturally pay attention to potential impacts on the person using the Web through their user agent. In addition to potential individual harms we also pay heed to collective effects that emerge from the accumulation of individual actions as influenced by entities and the structure of technology.

Note that in evaluating impact, we deliberately ignore what implementers or specifiers may have intended and only focus on outcomes. This framing is known as POSIWID, or "the Purpose Of a System Is What It Does".

The collective problem of privacy is known as legibility. Legibility concerns population-level data processing that may impact populations or individuals, including in ways that people could not control even under the optimistic assumptions of the FIPs. For example, based on population-level analysis, a company may know that site.example is predominantly visited by people of a given race or gender, and decide not to run its job ads there. Visitors to that page are implicitly having their data processed in inappropriate ways, with no way to discover the discrimination or seek relief [DEMOCRATIC-DATA].

What we consider is therefore not just the relation between the people who expose themselves and the entities that invite that disclosure [RELATIONAL-TURN], but also between the people who expose themselves and those who do not but may find themselves recognised as such indirectly anyway. One key understanding here is that such relations may persist even when data is permanently de-identified.

Legibility practices can be legitimate or illegitimate depending on the context and on the norms that apply in that context. Typically, a legibility practice may be legitimate if it is managed through an acceptable process of collective governance. For example, it is often considered legitimate for a government, under the control of its citizens, to maintain a database of license plates for the purpose of enforcing the rules of the road. It would be illegitimate to observe the same license plates near places of worship to build a database of religious identity.

Legibility is often used to order information about the world. This can notably create problems of reflexivity and of autonomy.

Problems of reflexivity occur when the ordering of information about the world used to produce legibility finds itself changing the way in which the world operates. This can produce self-reinforcing loops that can have deleterious effects both individual and collective [SEEING-LIKE-A-STATE].

Issues of autonomy occur depending on the manner in which legibility is implemented. When legibility is used to order the world following rules set by the user or following methods subject to public scrutiny and governance models with strong checks and balances (such as a newspaper's editorial decisions), then it will enhance user autonomy and tend to be legitimate. When it is done in the user's stead and without governance, it decreases user autonomy and tends to be illegitimate.

Data governance refers to the rules and processes for how data is processed in any given context. How data is governed describes who has power to make decisions over data and how [DATA-FUTURES-GLOSSARY].

In general, collective issues in data require collective solutions. The proper goal of data governance at the standards-setting level is the development of structural controls in user agents and the provision of institutions that can handle population-level problems in data. Governance will often struggle to achieve its goals if it works primarily by increasing individual control over data. A collective approach reduces the cost of control.

Collecting data at large scales can have significant pro-social outcomes. Problems tend to emerge when entities take part in dual-use collection in which data is processed for collective benefit but also for self-dealing purposes that may degrade welfare. The self-dealing purposes will be justified as bankrolling the pro-social outcomes, which, absent collective oversight, cannot be considered to support claims to legitimacy for such legibility. It is vital for standards-setting organisations to establish not just purely technical devices but techno-social systems that can govern data at scale.

3. Privacy principles by category

User agents should attempt to defend their users from a variety of high-level threats or attacker goals, described in this section.

These threats are an extension of the ones discussed by [RFC6973].

Surveillance is the observation or monitoring of an individual’s communications or activities. See RFC6973§5.1.1.
Data Compromise occurs when end systems do not take adequate measures to secure data from unauthorized or inappropriate access. See RFC6973§5.1.2.
Intrusion consists of invasive acts that disturb or interrupt one’s life or activities. See RFC6973§5.1.3.
Misattribution occurs when data or communications related to one individual are attributed to another. See RFC6973§5.1.4.
Correlation is the combination of various pieces of information related to an individual or that obtain that characteristic when combined. See RFC6973§5.2.1.
Profiling is the inference, evaluation, or prediction of an individual's attributes, interests, or behaviours.
Identification is the linking of information to a particular individual, even if the information isn't linked to that individual's real-world identity (e.g. their legal name, address, government ID number, etc.). Identifying someone allows a system to treat them differently from others, which can be inappropriate depending on the context. See RFC6973§5.2.2.
Secondary use is the use of collected information about an individual without the individual's consent for a purpose different from that for which the information was collected. See RFC6973§5.2.3.
Disclosure is the revelation of information about an individual that affects the way others judge the individual. See RFC6973§5.2.4.
Exclusion is the failure to allow individuals to know about the data that others have about them and to participate in its handling and use. See RFC6973§5.2.5.

These threats combine into the particular concrete threats we want web specifications to defend against, described in subsections here:

3.1 Unwanted cross-context recognition

Contributes to surveillance, correlation, and identification.

As described in § 2.6 Identity on the Web, cross-context recognition can sometimes be appropriate, but users need to be able to control when websites do it as much as possible.

Best Practice 2: User agents should ensure that, if a person visits two or more web pages from different partitions, the pages cannot quickly determine that the visits probably came from the same person, for any significant or involuntary fraction of the people who use the web, unless the person explicitly expresses the same identity to the visits, or preventing this correlation would break a technical feature that is fundamental to the Web.

Partitions are separated in two ways that lead to distinct kinds of user-visible recognition. When their divisions between different sites are violated, that leads to § 3.1.2 Unwanted cross-site recognition. When a violation occurs at their other divisions, for example between different browser profiles or at the point a user clears their cookies and site storage, that leads to § 3.1.1 Same-site recognition.

3.1.1 Same-site recognition

The web platform offers many ways for a website to recognize that a user is using the same identity over time, including cookies, localStorage, indexedDB, CacheStorage, and other forms of storage. This allows sites to save the user's preferences, shopping carts, etc., and users have come to expect this behavior in some contexts.

A privacy harm occurs if the user reasonably expects that they'll be using a different identity on a site, but the site discovers and uses the fact that the two or more visits probably came from the same user anyway.

User agents can't, in general, determine exactly where intra-site context boundaries are, or how a site allows a user to express that they intend to change identities, so they're not responsible for enforcing that sites actually separate user identities at those boundaries. The principle here instead requires separation at partition boundaries.

Cross-partition recognition is generally accomplished by either "supercookies" or browser fingerprinting.

Supercookies occur when a browser stores data for a site but makes that data more difficult to clear than other cookies or storage. Fingerprinting Guidance § Clearing all local state discusses how specifications can help browsers avoid this mistake.

Fingerprinting consists of using attributes of the user's browser and platform that are consistent between two or more visits and probably unique to the user.

The attributes can be exposed as information about the user's device that is otherwise benign (as opposed to § 3.2 Sensitive information disclosure). For example:

  • What are the user's language and time zone?
  • What size is the user's window?
  • What system preferences has the user set? Dark mode, serif font, etc...
  • ...

See [fingerprinting-guidance] for how to mitigate this threat.
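The reason individually benign attributes add up to a fingerprint can be illustrated with a back-of-the-envelope entropy estimate: if the attributes were (hypothetically) independent, their bits of identifying information simply add, and n bits split a population into roughly 2^n distinguishable buckets. The bit counts below are illustrative assumptions, not measured values.

```python
# Illustrative per-surface entropy contributions, in bits. Real values
# vary by population and are generally not independent, so this is an
# upper-bound sketch rather than a measurement.
surface_bits = {
    "language": 2.0,
    "time_zone": 3.0,
    "window_size": 5.0,
    "dark_mode": 1.0,
}

def combined_entropy(bits: dict) -> float:
    """Total bits, assuming (unrealistically) independent attributes."""
    return sum(bits.values())

def users_distinguished(bits: dict) -> float:
    """n bits of entropy split a population into ~2**n buckets."""
    return 2 ** combined_entropy(bits)

total = combined_entropy(surface_bits)
print(f"{total:.1f} bits -> ~{users_distinguished(surface_bits):.0f} buckets")
# 11.0 bits -> ~2048 buckets
```

This is why mitigations in [fingerprinting-guidance] focus on reducing the entropy each surface exposes: a handful of innocuous-looking preferences can together narrow a user down to a very small group.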

3.1.2 Unwanted cross-site recognition

A privacy harm occurs if a site determines with high probability and uses the fact that a visit to that site comes from the same person as another visit to a different site, unless the person could reasonably expect the sites to discover this. Traditionally, sites have accomplished this using cross-site cookies, but it can also be done by having a user navigate to a link that has been decorated with a user ID, collecting the same piece of identifying information on both sites, or by correlating the timestamps of an event that occurs nearly-simultaneously on both sites.
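One of the mechanisms above, link decoration, can be sketched as a heuristic check over navigation URLs. Everything here is an assumption for illustration: the parameter names, the length threshold, and the idea of flagging (rather than stripping) suspect values; this is not a technique the document prescribes.

```python
from urllib.parse import parse_qsl, urlsplit

# Hypothetical denylist of parameter names known to carry user IDs,
# plus a crude length heuristic for unrecognised high-entropy values.
SUSPECT_PARAMS = {"uid", "user_id", "fbclid", "gclid"}

def decorated_params(url: str) -> list:
    """Return query parameter names that look like decorated user
    identifiers capable of joining identities across two sites."""
    suspects = []
    for name, value in parse_qsl(urlsplit(url).query):
        if name.lower() in SUSPECT_PARAMS or len(value) >= 20:
            suspects.append(name)
    return suspects

print(decorated_params("https://site.example/?gclid=Cj0KCQjw&page=2"))
# ['gclid']
```

Heuristics like this are inherently incomplete, which is one reason the text frames cross-site recognition as a harm to defend against at the platform level rather than something any single filter can eliminate.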

3.2 Sensitive information disclosure

Contributes to correlation, identification, secondary use, and disclosure.

Many pieces of information about a user could cause privacy harms if disclosed. For example:

A particular piece of information may have different sensitivity for different users. Language preferences, for example, might typically seem innocent, but also can be an indicator of belonging to an ethnic minority. Precise location information can be extremely sensitive (because it's identifying, because it allows for in-person intrusions, because it can reveal detailed information about a person's life) but it might also be public and not sensitive at all, or it might be low-enough granularity that it is much less sensitive for many users.
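The point about granularity can be made concrete: coarsening a location before exposing it trades utility for sensitivity. The sketch below rounds coordinates; the precision levels and the "roughly a kilometre" claim for two decimal places are illustrative assumptions, and a real design would also consider population density and repeated queries.

```python
def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple:
    """Reduce location granularity by rounding. Two decimal places
    keeps a coordinate within roughly a kilometre: enough for weather
    or nearby-store use cases, but not enough to pick out a home
    address. Fewer decimals lowers sensitivity further."""
    return (round(lat, decimals), round(lon, decimals))

precise = (48.858370, 2.294481)  # an exact point
coarse = coarsen(*precise)
print(coarse)  # (48.86, 2.29)
```

Note that coarsening is not a complete mitigation: repeated coarse samples over time can still reveal a home or workplace, so the context and frequency of disclosure matter as much as the granularity of any single value.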

When considering whether a class of information is likely to be sensitive to users, consider at least these factors:

Issue(16): This description of what makes information sensitive still needs to be refined.

3.3 Unexpected profiling

Contributes to surveillance, correlation, identification, and singling-out / discrimination.

Unexpected profiling occurs when a site is able to learn attributes or characteristics about a person that (a) the site visitor did not intend the site to learn, and (b) the site visitor could not reasonably anticipate the site would be able to learn.

Profiling contributes to, but is distinct from, other privacy risks discussed in this document. For example, unexpected profiling may contribute to § 3.1.1 Same-site recognition, by adding stable and semi-identifying information that can contribute to browser fingerprinting. Unexpected profiling is distinct from same-site recognition though, in that a person may wish to not share some kinds of information about themselves even in the presence of guarantees that such information will not lead to them being re-identified.

Similarly, unexpected profiling is related to § 3.2 Sensitive information disclosure, but the former is a superset of the latter: all cases of unexpected sensitive information disclosure are examples of unexpected profiling, but Web users may have attributes or characteristics that are not universally thought of as "sensitive" yet which they nevertheless do not wish to share with the sites they visit. People may wish not to share these "non-sensitive" characteristics for a variety of reasons (e.g., a person may worry that their idea of what counts as "sensitive" differs from others', may be ashamed or uncomfortable about a character trait, or may simply not wish to be profiled).

Profiling occurs for many reasons. It can be used to facilitate price discrimination or offer manipulation, to make inferences about what products or services users might be more likely to purchase, or more generally, for a site to learn attributes about the Web user the Web user does not intend to share. Unexpected profiling can also contribute to feelings of powerlessness and loss of agency.

A privacy harm occurs if a site learns information about the user that the user reasonably expected the site would not be able to learn, regardless of whether that information aids (re)identification or is from a sensitive category of information (however defined).

Peter is a furry. Despite knowing that there are thousands of other furries on the internet, despite using a browser with robust browser fingerprinting protections, and despite the growing cultural acceptance of furries, Peter does not want (most) sites to learn about, or personalize content around, his interest in the furry fandom.

3.4 Intrusive behavior

See intrusion.

Privacy harms don't always come from a site learning things. For example, it is intrusive for a site to

if the user doesn't intend for it to do so.

3.5 Powerful capabilities

Contributes to misattribution.

For example, a site that sends an SMS without the user's intent could cause them to be blamed for things they did not intend.
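As an illustrative sketch only (the function and type names here are hypothetical, not part of any Web API), a user agent mediating a powerful capability might refuse a request outright when it lacks a recent user gesture, and otherwise surface a prompt so that the action can be tied to the user's expressed intent:

```typescript
// Hypothetical sketch of user-agent mediation for a powerful capability.
type Decision = "granted" | "denied";

function mediateCapability(
  capability: string,
  hasTransientActivation: boolean, // was there a recent user gesture?
  promptUser: (capability: string) => Decision // the user's explicit choice
): Decision {
  // Without a recent user gesture, the request cannot reflect the user's
  // intent, so refuse without even prompting.
  if (!hasTransientActivation) {
    return "denied";
  }
  // Otherwise, let the user decide explicitly.
  return promptUser(capability);
}
```

The point of the sketch is that intent is checked twice: implicitly (a gesture gated the request) and explicitly (the user answered a prompt), so the capability cannot be exercised, and misattributed, without the user's involvement.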

A. References

A.1 Informative references

Content Aggregation Technology (CAT). Robin Berjon; Justin Heideman. URL:
Confiding in Con Men: U.S. Privacy Law, the GDPR, and Information Fiduciaries. Lindsey Barrett. URL:
Publishers tell Google: We're not your consent lackeys. Rebecca Hill. The Register. URL:
What Makes a Dark Pattern… Dark? Design Attributes, Normative Considerations, and Measurement Methods. Arunesh Mathur; Jonathan Mayer; Mihir Kshirsagar. URL:
Dark patterns: past, present, and future. Arvind Narayanan; Arunesh Mathur; Marshini Chetty; Mihir Kshirsagar. ACM. URL:
Data Futures Lab Glossary. Mozilla Insights. Mozilla Foundation. URL:
Democratic Data: A Relational Theory For Data Governance. Salomé Viljoen. Yale Law Journal. URL:
Web Platform Design Principles. Sangwhan Moon. W3C. 23 September 2021. W3C Note. URL:
Digital Market Manipulation. Ryan Calo. George Washington Law Review. URL:
W3C TAG Ethical Web Principles. Daniel Appelquist; Hadley Beeman. W3C. 27 October 2020. TAG Finding. URL:
The Fiduciary Duties of User Agents. Robin Berjon. URL:
Mitigating Browser Fingerprinting in Web Specifications. Nick Doty. W3C. 28 March 2019. W3C Note. URL:
General Data Protection Regulation (GDPR) / Regulation (EU) 2016/679. European Parliament and Council of European Union. URL:
Global Privacy Control (GPC). Robin Berjon; Sebastian Zimmeck; Ashkan Soltani; David Harbage; Peter Snyder. W3C. URL:
HTML Standard. Anne van Kesteren; Domenic Denicola; Ian Hickson; Philip Jägenstedt; Simon Pieters. WHATWG. Living Standard. URL:
Indexed Database API. Nikunj Mehta; Jonas Sicking; Eliot Graff; Andrei Popescu; Jeremy Orlow; Joshua Bell. W3C. 8 January 2015. W3C Recommendation. URL:
How The New York Times Thinks About Your Privacy. NYT Open. URL:
Privacy As Contextual Integrity. Helen Nissenbaum. Washington Law Review. URL:
Privacy and Human Behavior in the Age of Information. Alessandro Acquisti; Laura Brandimarte; George Loewenstein. Science. URL:
Privacy in Context. Helen Nissenbaum. SUP. URL:
Target Privacy Threat Model. Jeffrey Yasskin; Tom Lowenthal. W3C PING. URL:
Public Suffix List Problems. Ryan Sleevi. URL:
A Relational Turn for Data Protection?. Neil Richards; Woodrow Hartzog. URL:
HTTP State Management Mechanism. A. Barth. IETF. April 2011. Proposed Standard. URL:
Privacy Considerations for Internet Protocols. A. Cooper; H. Tschofenig; B. Aboba; J. Peterson; J. Morris; M. Hansen; R. Smith. IETF. July 2013. Informational. URL:
The Internet is for End Users. M. Nottingham. IETF. August 2020. Informational. URL:
Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. James C. Scott. URL:
Service Workers 1. Alex Russell; Jungkee Song; Jake Archibald; Marijn Kruisselbrink. W3C. 19 November 2019. W3C Candidate Recommendation. URL:
Taking Trust Seriously in Privacy Law. Neil Richards; Woodrow Hartzog. URL:
Tracking Preference Expression (DNT). Roy Fielding; David Singer. W3C. 17 January 2019. W3C Note. URL:
Developer Policy - Twitter Developers. Twitter. URL: