Copyright © 2023 World Wide Web Consortium. W3C® liability, trademark and permissive document license rules apply.
Privacy is an essential part of the Web. This document provides definitions for privacy and related concepts that are applicable worldwide as well as a set of privacy principles that should guide the development of the Web as a trustworthy platform. People using the Web would benefit from a stronger relationship between technology and policy, and this document is written to work with both.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document is a Draft Finding of the Technical Architecture Group (TAG) which we are releasing as a Draft Note. The intent is for this document to become a W3C Statement. It was prepared by the Web Privacy Principles Task Force, which was convened by the TAG. Publication as a Draft Finding or Draft Note does not imply endorsement by the TAG or by the W3C Membership.
This draft does not yet reflect the consensus of the TAG or the task force and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as anything other than a work in progress.
It will continue to evolve and the task force will issue updates as often as needed. At the conclusion of the task force, the TAG intends to adopt this document as a Finding.
This document was published by the Technical Architecture Group as an Editor's Draft.
Publication as an Editor's Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 2 November 2021 W3C Process Document.
This document elaborates on the privacy principle from the W3C TAG Ethical Web Principles: "Security and privacy are essential." While it focuses on privacy, this should not be taken as an indication that privacy is always more important than other ethical web principles, and this document doesn't address how to balance the different ethical web principles if they come into conflict.
Privacy on the Web is primarily regulated by two forces: the architectural capabilities that the Web platform exposes (or does not), and laws in the various jurisdictions where the Web is used ([New-Chicago-School]). These regulatory mechanisms are separate; a law in one country does not (and should not) change the architecture of the whole Web, and likewise Web specifications cannot override any given law (although they can affect how easy it is to create and enforce law). The Web is not merely an implementation of a particular legal privacy regime; it has distinct features and guarantees driven by shared values that often exceed legal requirements for privacy.
However, the overall goal of privacy on the Web is served best when technology and law complement each other. This document seeks to establish shared concepts as an aid to technical efforts to regulate privacy on the Web. It may also be useful in pursuing alignment with and between legal regulatory regimes.
Our goal for this document is not to cover all possible privacy issues, but rather to provide enough background to support the Web community in making informed decisions about privacy and in weaving privacy into the architecture of the Web.
Few architectural principles are absolute, and privacy is no exception: privacy can come into tension with other desirable properties of an ethical architecture, and when that happens the Web community will have to work together to strike the right balance.
The primary audiences for this document are
Additional audiences include:
This document is intended to help its audiences address privacy concerns as early as possible in the life cycle of a new Web standard or feature, or in the development of Web products. Beginning with privacy in mind helps avoid the need to add special cases later to address unforeseen but predictable issues, and reduces the risk of building systems that turn out to be unacceptable to users.
Because this document guides privacy reviews of new standards, authors of web specifications should consult it early in the design to make sure their feature passes the review smoothly.
This is a document containing technical guidelines. However, in order to put those guidelines in context we must first define some terms and explain what we mean by privacy.
The Web is for everyone ([For-Everyone]). It is "a platform that helps people and provides a net positive social benefit" ([ETHICAL-WEB], [design-principles]). One of the ways in which the Web serves people is by protecting them in the face of asymmetries of power, and this includes establishing and enforcing rules to govern the power of data.
The Web is a social and technical system made up of information flows. Because this document is specifically about privacy as it applies to the Web, it focuses on privacy with respect to information flows.
Information is power. It can be used to predict and to influence people, as well as to design online spaces that control people's behaviour. The collection and processing of information in greater volume, with greater precision and reliability, with increasing interoperability across a growing variety of data types, and at intensifying speed is leading to a concentration of power that threatens private and public liberties. What's more, automation and the increasing computerisation of all aspects of our lives both increase the power of information and decrease the cost of a number of intrusive behaviours that would be more easily kept in check if the perpetrator had to be in the same room as the victim.
These asymmetries of information and of automation create significant asymmetries of power.
Data governance is the system of principles that regulate information flows. When people are involved in information flows, data governance determines how these principles constrain and distribute the power of information between different actors. Such principles describe the ways in which different actors may, must, or must not produce or process flows of information from, to, or about other actors ([GKC-Privacy], [IAD]).
It is important to keep in mind that not all people are equal in how they can resist the imposition of unfair principles: some people are more vulnerable and therefore in greater need of protection. This document focuses on the impact that differences in information power can have on people, but those differences can also impact other actors, such as companies or governments.
Principles vary from context to context ([Understanding-Privacy], [Contextual-Integrity]): people have different expectations of privacy at work, at a café, or at home for instance. Understanding and evaluating a privacy situation is best done by clearly identifying:
It is important to keep in mind that there are always privacy principles and that all of them imply different power dynamics. Some sets of principles may be more permissive, but that does not make them neutral — it means that they support the power dynamic that comes with permissive processing. We must therefore determine which principles best align with ethical Web values in Web contexts ([ETHICAL-WEB], [Why-Privacy]).
Information flows as used in this document refer to information exchanged or processed by actors. The information itself need not necessarily be personal data. Disruptive or interruptive information flowing to a person is in scope, as is de-identified data that can be used to manipulate people or that was extracted by observing people's behaviour on a website.
Information flows need to be understood from more than one perspective: there is the flow of information about a person (the subject) being processed or transmitted to any other actor, and there is the flow of information towards a person (the recipient). Recipients can have their privacy violated in multiple ways such as unexpected shocking images, loud noises while they intend to sleep, manipulative information, interruptive messages when their focus is on something else, or harassment when they seek social interactions.
On the Web, information flows may involve a wide variety of actors that are not always recognizable or obvious to a user within a particular interaction. Visiting a website may involve the actors that operate that site and its functionality, but also actors with network access, which may include: Internet service providers; other network operators; local institutions providing a network connection including schools, libraries or universities; government intelligence services; malicious hackers who have gained access to the network or the systems of any of the other actors. High-level threats including surveillance may be pursued by these actors. Pervasive monitoring, a form of large-scale, indiscriminate surveillance, is a known attack on the privacy of users of the Internet and the Web [RFC7258].
Information flows may also involve other people — for example, other users of a site — which could include friends, family members, teachers, strangers, or government officials. Some threats to privacy, including both disclosure and harassment, may be particular to the other people involved in the information flow.
A person's autonomy is their ability to make decisions of their own personal will, without undue influence from other actors. People have limited intellectual resources and time with which to weigh decisions, and by necessity rely on shortcuts when making decisions. This makes their preferences, including privacy preferences, malleable and susceptible to manipulation ([Privacy-Behavior], [Digital-Market-Manipulation]). A person's autonomy is enhanced by a system or device when that system offers a shortcut that aligns more with what that person would have decided given arbitrary amounts of time and relatively unlimited intellectual ability; and autonomy is decreased when a similar shortcut goes against decisions made under such ideal conditions.
Affordances and interactions that decrease autonomy are known as deceptive patterns (or dark patterns). A deceptive pattern does not have to be intentional ([Dark-Patterns], [Dark-Pattern-Dark]).
Because we are all subject to motivated reasoning, the design of defaults and affordances that may impact autonomy should be the subject of independent scrutiny.
Given the large volume of potential data-related decisions in today's data economy, complete informational self-determination is impossible. This fact, however, should not be confused with the idea that privacy is dead. Studies show that people remain concerned over how their data is processed, feeling powerless and like they have lost agency ([Privacy-Concerned]). Careful design of our technological infrastructure can ensure that people's autonomy with respect to their own data is enhanced through appropriate defaults and choice architectures.
Several kinds of mechanisms exist to enable people to control how they interact with systems in the world. Mechanisms that increase the number of purposes for which a person's data is processed, or the amount of processing of that data, are referred to as opt-in or consent. Mechanisms that decrease the number of purposes or the amount of processing are known as opt-out.
When deployed thoughtfully, these mechanisms can enhance people's autonomy. Often, however, they are used as a way to avoid putting in the difficult work of deciding which types of processing are appropriate and which are not, offloading privacy labour to the people using a system.
In specific cases, people should be able to consent to data sharing that would otherwise be restricted, such as having their identity or reading history shared across contexts. Actors need to take care that their users are informed when granting this consent and aware enough about what's going on that they can know to revoke their consent when they want to. Consent is comparable to the general problem of permissions on the Web platform. Both consent and permissions should be requested in a way that lets people delay or avoid answering if they're trying to do something else. If either results in persistent data access, there should be an indicator that lets people notice and that lets them turn off the access if it has lasted longer than they want. In general, providing consent should be rare, intentional, and temporary.
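As a rough TypeScript sketch of how a site can respect an existing permission decision rather than re-prompting, the following uses the Permissions API; `stopUsingLocation()` is a hypothetical site-specific function, not part of any platform API.

```
// Sketch: consulting an existing permission rather than prompting again, and
// reacting when the person revokes it. stopUsingLocation() is a hypothetical
// site-specific function.
const status = await navigator.permissions.query({ name: "geolocation" });

status.addEventListener("change", () => {
  if (status.state !== "granted") {
    stopUsingLocation(); // stop persistent access as soon as it is revoked
  }
});
```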
When an opt-out mechanism exists, it should preferably be complemented by a global opt-out mechanism. The function of a global opt-out mechanism is to rectify the automation asymmetry whereby service providers can automate data processing but people have to take manual action to prevent it. A good example of a global opt-out mechanism is the Global Privacy Control [GPC].
Conceptually, a global opt-out mechanism is an automaton operating as part of the user agent: it is equivalent to a robot that carries out a person's bidding by pressing an opt-out button with every interaction that the person has with a site, or, more generally, that conveys an expression of the person's rights in a relevant jurisdiction. (For instance, the person may be objecting to processing based on legitimate interest, withdrawing consent to specific purposes, or requesting that their data not be sold or shared.) Note that, because a global opt-out signal is reaffirmed automatically with every interaction, it takes precedence, in terms of specificity, over any general consent that a site has obtained, and is only superseded by specific consent obtained through a deliberate action taken by the person with the intent of overriding their global opt-out.
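As a minimal TypeScript sketch of honouring such a signal, the following assumes the Global Privacy Control proposal ([GPC]), which exposes `navigator.globalPrivacyControl` to scripts and a `Sec-GPC: 1` request header to servers; `disableDataSharing()` is a hypothetical site-specific function.

```
// Sketch: honouring a global opt-out signal, assuming the Global Privacy
// Control proposal [GPC]. A missing signal means "no expressed preference",
// not consent. disableDataSharing() is a hypothetical site-specific function.
function personHasOptedOut(): boolean {
  return (navigator as { globalPrivacyControl?: boolean }).globalPrivacyControl === true;
}

if (personHasOptedOut()) {
  // The signal is reaffirmed with every interaction, so re-check it on each
  // page load rather than caching an earlier consent state.
  disableDataSharing();
}
```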
Privacy labour is the practice of having a person carry out the work of ensuring data processing of which they are the subject or recipient is appropriate, instead of putting the responsibility on the actors who are doing the processing. Data systems that are based on asking people for their consent tend to increase privacy labour.
More generally, implementations of privacy are often dominated by self-governing approaches that offload labour to people. This is notably true of the regimes descended from the Fair Information Practices (FIPs), a loose set of principles initially elaborated in the 1970s in support of individual autonomy in the face of growing concerns with databases. The FIPs generally assume that there is sufficiently little data processing taking place that any person will be able to carry out sufficient diligence to enable autonomy in their decision-making. Since they offload the privacy labour to people and assume perfect, unlimited autonomy, the FIPs do not forbid specific types of data processing but only place them under different procedural requirements. This approach is no longer appropriate.
One notable issue with procedural, self-governing approaches to privacy is that they tend to have the same requirements in situations where people find themselves in a significant asymmetry of power with another actor — for instance a person using an essential service provided by a monopolistic platform — and those where a person and the other actor are very much on equal footing, or even where the person may have greater power, as is the case with small businesses operating in a competitive environment. They further do not consider cases in which one actor may coerce other actors into facilitating its inappropriate practices, as is often the case with dominant players in advertising or in content aggregation ([Consent-Lackeys], [CAT]).
References to the FIPs survive to this day, often in the form of "transparency and choice", which, in today's digital environment, is frequently an indication that inappropriate processing is being described.
Privacy principles are defined through social processes and, because of that, the applicable definition of privacy in a given context can be contested ([Privacy-Contested]). This makes privacy a problem of collective action ([GKC-Privacy]). Group-level data processing may impact populations or individuals, including in ways that people could not control even under the optimistic assumptions of consent.
What we consider is therefore not just the relation between the people who share data and the actors that invite that sharing ([Relational-Turn]), but also between the people who may find themselves categorised indirectly as part of a group even without sharing data. One key understanding here is that such relations may persist even when data is de-identified. What's more, such categorisation of people, voluntary or not, changes the way in which the world operates. This can produce self-reinforcing loops that can damage both individuals and groups ([Seeing-Like-A-State]).
In general, collective issues in data require collective solutions. Web standards help with data governance by defining structural controls in user agents, ensuring that researchers and regulators can discover group-level abuse, and establishing or delegating to institutions that can handle issues of privacy. Governance will often struggle to achieve its goals if it works primarily by increasing individual control instead of by collective action.
Collecting data at large scales can have significant pro-social outcomes. Problems tend to emerge when actors process data for collective benefit and for disloyal purposes at the same time. The disloyal purposes are often justified as bankrolling the pro-social outcomes, but such an arrangement requires collective oversight to be appropriate.
There are different ways for people to become members of a group. Either they can join it deliberately, making it a self-constituted group such as when joining a club, or they can be classified into it by an external actor, typically a bureaucracy or its computerised equivalent ([Beyond-Individual]). In the latter case, people may not be aware that they are being grouped together, and the definition of the group may not be intelligible (for instance if it is created from opaque machine learning techniques).
Protecting group privacy can take place at two different levels. The existence of a group, or at least its activities, may need to be protected even in cases in which its members are guaranteed to remain anonymous; we refer to this as "group privacy". Conversely, people may wish to protect knowledge that they are members of a group even though the existence of the group and its actions may be well known (e.g. membership in a dissident movement under authoritarian rule), which we call "membership privacy". An example violation of the former is the fitness app Strava, which did not reveal individual behaviour or legal identities but published heat maps of popular running routes; in doing so, it revealed secret US bases around which military personnel took frequent runs ([Strava-Debacle], [Strava-Reveal-Military]).
When people do not know that they are members of a group, when they cannot easily find other members of the group so as to advocate for their rights together, or when they cannot easily understand why they are being categorised into a given group, their ability to protect themselves through self-governing approaches to privacy is largely eliminated.
One common problem in group privacy is when the actions of one member of a group reveal information that other members would prefer were not shared in this way (or at all). For instance, one person may publish a picture of an event in which they are featured alongside others, while the other people captured in the same picture would prefer their participation not to be disclosed. Another example of such issues is sites that enable people to upload their contacts: the person performing the upload might be more open to disclosing their social network than the people they are connected to are. Such issues do not necessarily admit simple, straightforward solutions, but they need to be carefully considered by people building websites.
While transparency rarely helps enough to inform the individual choices that people may make, it plays a critical role in letting researchers and reporters inform our collective decision-making about privacy principles. This consideration extends the TAG's resolution on a Strong and Secure Web Platform to ensure that "broad testing and audit continues to be possible" where information flows and automated decisions are involved.
Such transparency can only function if there are strong rights of access to data (including data derived from one's personal data) as well as mechanisms to explain the outcomes of automated decisions.
A user agent acts as an intermediary between a person (its user) and the web. User agents implement, to the extent possible, the principles that collective governance establishes in favour of individuals. They seek to prevent the creation of asymmetries of information, and serve their user by providing them with automation to rectify automation asymmetries. Where possible, they protect their user from receiving intrusive messages.
The user agent is expected to align fully with the person using it and to operate exclusively in that person's interest. It is not the first party. The user agent serves the person as a trustworthy agent: it always puts that person's interest first. On some occasions, this can mean protecting that person from themselves by preventing them from carrying out a dangerous decision, or by slowing down their decision-making. For example, the user agent will make it difficult for someone to connect to a site if it can't verify that the site is authentic. It will check that that person really intends to expose a sensitive device to a page. It will prevent that person from consenting to the permanent monitoring of their behaviour. Its user agent duties include ([Taking-Trust-Seriously]):
These duties ensure the user agent will care for its user. In academic research, this relationship with a trustworthy agent is often described as "fiduciary" ([Fiduciary-Law], [Fiduciary-Model], [Taking-Trust-Seriously]; see [Fiduciary-UA] for a longer informal discussion). Some jurisdictions may have a distinct legal meaning for "fiduciary." ([Fiduciary-Law])
Many of the principles described in the rest of this document extend the user agent's duties and make them more precise.
While privacy principles are designed to work together and support each other, occasionally a proposal to improve how a system follows one privacy principle may reduce how well it follows another principle.
Given any initial design that doesn't perfectly satisfy all principles, there are usually some other designs that improve the situation for some principles without sacrificing anything about the other principles. Work to find those designs.
Another way to say this is to look for Pareto improvements before starting to trade off between principles.
Once one is choosing between different designs at the Pareto frontier, the choice of which privacy principles to prefer is complex and depends heavily on the details of each particular situation. Note that people's privacy can also be in tension with non-privacy concerns. As discussed in the W3C TAG Ethical Web Principles, "it is important to consider the context in which a particular technology is being applied, the expected audience(s) for the technology, who the technology benefits and who it may disadvantage, and any power dynamics involved" ([ETHICAL-WEB]). Despite this complexity, there is a basic ground rule to follow:
This is a special case of the more general principle that data should not be used for more purposes than the data's subjects understood it was being collected for.
Services sometimes use people's data in order to protect those or other people. A service that does this should explain what data it's using for this purpose. It should also say how it might use or share a person's data if it believes that person has violated the service's rules.
It is attractive to say that if someone violates the rules of a service they're using, then they sacrifice a proportionate amount of their privacy protections, but
The following examples illustrate some of the tensions:
As indicated above, different contexts require different principles. This section describes a set of principles designed to apply to the Web context in general. The Web is a big place, and we fully expect more specific contexts of the Web to add their own principles to further constrain information flows.
To the extent possible, user agents are expected to enforce these principles. However, this is not always possible and additional enforcement mechanisms are needed. One particularly salient issue is that a context is not defined in terms of who owns or controls it. Sharing data between different contexts of a single company is just as much a privacy violation as if the same data were shared between unrelated actors.
A person's identity is the set of characteristics that define them. Their identity in a context is the set of characteristics they present in that context. People frequently present different identities to different contexts, and also frequently share an identity among several contexts. People may also wish to present an ephemeral or anonymous identity, which is just a set of characteristics that is too small or unstable to be useful for following them through time.
It is important to keep in mind that a person's identities may often be distinct from whatever legal identity or identities they may hold.
Recognition is the act of realising that a given identity corresponds to the same person as another identity which may have been observed either in another context or in the same context but at a different time. A person can be recognized whether or not their legal identity or characteristics of their legal identity are included in the recognition.
In order to uphold the above principle, sometimes a user agent needs to prevent recognition, for instance so that one site can't learn anything about its user's behavior on another site. Other times, the user agent needs to support recognition, for instance to help its user prove to one site that they have a particular identity on another site. Similarly, a user agent can help its user to separate or communicate identity across repeat visits to the same site.
There are several types of recognition that may take place. These rely on different methods and present different challenges.
Cross-context recognition is recognition between different contexts. It contributes to surveillance, correlation, and identification.
Cross-context recognition is only appropriate when the person being recognized can reasonably expect that recognition to happen and can control whether it does. Note that a person can use a piece of identifying information in two different contexts (e.g. their email or phone number) without that implying that they're using the same identity in both contexts. Unless there's some other indication that they intended to use a single identity, it is inappropriate to recognize them using that information, or to seek extra identifying information to help with cross-context recognition.
Systems which recognize people across contexts need to be careful not to apply the principles of one context in ways that violate the principles around use of information acquired in a different context. This is particularly true for vulnerable people, as recognising them in different contexts may force traits into the open that reveal their vulnerability. For example, if you meet your therapist at a party, you expect them to have different discussion topics with you than they usually would, and possibly even to pretend they don't know you.
Cross-site recognition is when a site determines with high probability that a visit to the site comes from the same person as another visit to a different site. In the usual case that the sites are different contexts, cross-site recognition is a privacy harm in the same cases as cross-context recognition.
Same-site recognition is when a single site discovers and uses the fact that two or more visits probably came from the same person.
A privacy harm occurs if a person reasonably expects that they'll be using a different identity for different visits to a single site, but the site recognizes them anyway. This recognition can be accomplished through a variety of means detailed in 2.1.3 Recognition Methods.
Note that these categories overlap: cross-site recognition is usually cross-context recognition (and always recognizes across partitions); and same-site recognition is sometimes cross-context recognition (and may or may not involve multiple partitions).
A partition is the user agent's attempt to match how its user would understand a context. User agents don't have a perfect understanding of how their users experience the sites they visit, so they often need to approximate the boundaries between contexts when building partitions. In the absence of better information, a partition can be defined as:
the top-level site being visited and the environments involved in the interaction (e.g. iframes, workers, and top-level pages).
When a user agent knows that a site includes multiple contexts, it should adjust its partitions accordingly, for instance by partitioning identities per subdomain or site path. User agents should work to improve their ability to distinguish contexts within a site.
Where possible, user agents should prevent people from being recognized across partitions unless they intend to be recognized. Note that:
If a user agent can tell that its user is using a particular identity on a website, for example because the user used an API like Credential Management Level 1 to log into the site, it should make that active identity clear to the user.
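As a TypeScript sketch of how such an active identity can become known to the user agent, a site might retrieve a stored credential via Credential Management Level 1; `signIn()` is a hypothetical site-specific function, and the cast is only because the `password` option predates current DOM typings.

```
// Sketch: a site asking the user agent for a stored credential via Credential
// Management Level 1. The user agent then knows which identity is active and
// can surface it to its user. signIn() is a hypothetical site function.
const credential = await navigator.credentials.get({
  mediation: "optional",
  password: true,
} as CredentialRequestOptions);

if (credential) {
  signIn(credential); // e.g. credential.id identifies the active identity
}
```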
The web platform offers many ways for a website to recognize that a person is using the same identity over time, including cookies, CacheStorage, and other forms of storage. This allows sites to save the person's preferences, shopping carts, etc., and people have come to expect this behavior in some contexts.
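As a minimal TypeScript illustration of the kind of same-site recognition people generally expect, a site might remember a preference across visits using client-side storage; the key name here is purely illustrative.

```
// Sketch: expected same-site recognition, such as remembering a preference
// across visits within one partition. The key name is illustrative only.
localStorage.setItem("preferredLanguage", "fr");

// On a later visit to the same site, the stored preference can be read back:
const preferred = localStorage.getItem("preferredLanguage") ?? "en";
console.log(`Rendering the site in: ${preferred}`);
```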
People are unlikely to expect the recognition and will find it difficult to mitigate when it is automated, which can happen in different ways:
In addition to recognition methods that can operate automatically across contexts, recognition can also be made persistent such that it will defeat potential mitigations like partitions or clearing one's cookies. This constitutes unsanctioned tracking ([UNSANCTIONED-TRACKING]) and can take multiple forms.
Fingerprinting consists of using attributes of the person's browser and platform that are consistent between two or more visits and probably unique to the person.
The attributes can be exposed as information about the person's device that is otherwise benign (as opposed to 2.4 Sensitive Information). For example:
Preventing fingerprinting can be particularly challenging in cases that only affect a small group of people who use the web, for example people who configure their systems in unique ways, such as by using a browser with a very small number of users. As long as a tracker can't track a significant number of people, it's likely to be unviable to maintain the tracker. However, this doesn't excuse making small groups of people trackable when those people didn't choose to be in the group.
See [fingerprinting-guidance] for how to mitigate threats that result from fingerprinting.
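As a rough, illustrative TypeScript sketch of the threat (not of any particular tracker's method), a handful of individually benign attributes can be combined into a fairly stable identifier:

```
// Sketch: how individually benign attributes can combine into a fairly stable
// fingerprint. Illustrative only; real fingerprinting scripts use many more
// signals (see [fingerprinting-guidance] for mitigations).
const signals = [
  navigator.userAgent,
  navigator.language,
  String(screen.width),
  String(screen.height),
  Intl.DateTimeFormat().resolvedOptions().timeZone,
  String(navigator.hardwareConcurrency),
];

// Hash the concatenated signals into a single identifier-like value.
const data = new TextEncoder().encode(signals.join("|"));
const digest = await crypto.subtle.digest("SHA-256", data);
const fingerprint = Array.from(new Uint8Array(digest))
  .map((b) => b.toString(16).padStart(2, "0"))
  .join("");
```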
Supercookies occur when a user agent stores data for a site but makes that data more difficult to clear than ordinary cookies or storage, typically because of a bug, because of features relating to cache storage and network state (e.g. ETag, HSTS), or because the browser restores the browser vendor's cookies when local state is cleared. Fingerprinting Guidance § Clearing all local state discusses how specifications can help user agents avoid this mistake.
Header enrichment happens when a network operator adds HTTP request headers that identify its customers to the sites they visit. It is unfortunately difficult for a user agent to mitigate header enrichment.
Cross-device communication is communication between code on one device and code running on another device. For example, sounds or light emitted from one device could be detected by a microphone or light sensor on another device [SILVERPUSH]. Cross-device communication enables cross-device tracking, a form of cross-context recognition, but it can also be used for other inappropriate information flows.
Data minimization limits the risks of data being disclosed or misused, and it also helps user agents more meaningfully explain the decisions their users need to make.
Because personal data may be sensitive in unexpected ways, or have risks of future uses that could be unexpected or harmful, minimization as a principle applies to personal data that is not currently known to be identifying, sensitive, or otherwise potentially harmful.
Note that this principle was further explored in an earlier TAG draft on Data Minimization in Web APIs.
Websites sometimes use data in ways that aren't needed for the user's immediate goals. These uses are known as ancillary uses, and data that is primarily useful for ancillary uses is ancillary data.
Different users will want to share different kinds and amounts of ancillary data with websites, including possibly no ancillary data.
Aggregation or de-identification of data may make users willing to share ancillary data that they would otherwise not want to share. These techniques may be especially useful and important when ancillary data contributes to a collective benefit in a way that reduces privacy threats to individuals (see collective privacy).
User agents should aggressively minimize ancillary data and should avoid burdening the user with additional privacy labor when deciding what ancillary data to expose. To that end, user agents may employ user research, solicitation of general preferences, and heuristics about sensitivity of data or trust in a particular context. To help sites understand user preferences, user agents can provide browser-configurable signals to directly communicate common user preferences (such as a global opt-out).
Data exposed for ancillary uses including telemetry and analytics may often reveal characteristics of user configuration, device, environment, or behavior that could be used as part of browser fingerprinting to identify users across sites. Revealing user preferences or other heuristics in providing or disabling functionality could also contribute to a browser fingerprint.
The many APIs available to websites expose lots of data that can be combined into information about people, web servers, and other things. We can divide that information into three categories:
Information that's fine to expose, for example because a person or group with sufficient authority intended to expose that information or to do something that necessarily exposes the information, or because it's not about people at all. For example:
Information that we don't want to expose and have a plausible plan for removing access to. For example, browsers are gradually removing the ability to join identities between different partitions.
Information that we'd rather not expose, but that we don't have a plausible plan for removing access to. For example:
These principles don't describe exactly how to distinguish acceptable information from information we'd rather not expose. API designers instead need to balance the harm to users from exposing information against the harm to users from blocking that exposure. When in doubt, designers should ensure that different user agents can help their users balance the costs in different ways.
The following subsections discuss how to review an API proposal that exposes data that provides a new way to infer each of the above categories of information. They explain how to leave the web better than you found it.
Information that would be acceptable to expose under one set of access guards might be unacceptable under another set, so when an API designer intends to explain that their new API is acceptable because an existing acceptable API already exposes the same information, they must be careful to ensure that their new API is only available under a set of guards that's at least as strict. Without those guards, they need to make the argument from scratch, without relying on the existing API.
If future web platform changes make it possible to remove other access to the undesirable information, it should be clear how to extend those changes to the proposed API.
If an existing browser does block access to the undesirable information, perhaps by breaking some experiences on the Web that other browsers don't wish to break, it should be clear how the more-private browser can also prevent the new API from exposing that information without breaking additional sites or user experiences.
When a developer is trying to access the undesirable information, a new API should be at least as difficult to use as the existing APIs. For example, it shouldn't require less code, less maintenance, or less runtime cost.
The third consideration can be surprising. In many other cases, we can think in terms of a threat model and use designs familiar from security to make information either available or unavailable. In this third case, however, we have to think more economically and consider the cost to a website of inferring the relevant information from whatever data the web's APIs expose. If the cost of inferring the undesirable information is high, fewer websites will gather it, and privacy will be generally better. If a new API makes the cost go down, more websites will start inferring the information, and overall privacy will worsen.
Usually, acceptable APIs in this category will be designed to expose data that makes some acceptable information easier to discover. For example, they might reveal a performance metric for a website directly instead of requiring it to be computed from the timing of onload events. The challenge for the new API's designer is to ensure that the data it exposes doesn't make it cheaper to compute information about people than it would have been through other methods.
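As a TypeScript illustration of the onload-timing example above, the following computes a page-load metric from data the platform already exposes (the Navigation Timing API); an API that reports the same metric directly would only be simplifying an already acceptable measurement, not making information about people cheaper to infer.

```
// Sketch: a page-load metric that can already be computed from existing APIs.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
if (nav) {
  const loadTimeMs = nav.loadEventEnd - nav.startTime; // roughly what hand-timing onload gives
  console.log(`Page load took ${loadTimeMs.toFixed(0)} ms`);
}
```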
Contributes to correlation, identification, secondary use, and disclosure.
Many pieces of information about someone could cause privacy harms if disclosed. For example:
A particular piece of information may have different sensitivity for different people. Language preferences, for example, might typically seem innocent, but also can be an indicator of belonging to an ethnic minority. Precise location information can be extremely sensitive (because it's identifying, because it allows for in-person intrusions, because it can reveal detailed information about a person's life) but it might also be public and not sensitive at all, or it might be low-enough granularity that it is much less sensitive for many people.
When considering whether a class of information is likely to be sensitive to a person, consider at least these factors:
While data rights alone are not sufficient to satisfy all privacy principles for the Web, they do support self-determination and help improve accountability. Such rights include:
This right includes both being able to review what information has been collected or inferred about oneself and being able to discover what actors have collected information about oneself. As a result, databases cannot be kept secret and data collected about people needs to be meaningfully discoverable by those people.
The right to erase applies whether or not the person is terminating their use of a service altogether, though what data can be erased may differ between those two cases. On the Web, people may wish to erase data on their device, on a server, or both, and the distinctions may not always be clear.
Portability is needed to realize the ability for people to make choices about services with different data practices. Standards for interoperability are essential for effective re-use.
The right to correct data about oneself, to ensure that one's identity is properly reflected in a system.
The right to be free from automated decision-making based on data about oneself.
For some kinds of decision-making with substantial consequences, there is a privacy interest in being able to exclude oneself from automated profiling. For example, some services may alter the price of products (price discrimination) or offers for credit or insurance based on data collected about a person. Those alterations may be consequential (financially, say) and objectionable to people who believe those decisions based on data about them are inaccurate or unjust. As another example, some services may draw inferences about a user's identity, humanity, or presence based on facial recognition algorithms run on camera data. Because facial recognition algorithms and training sets are fallible and may exhibit certain biases, people may not wish to submit to decisions based on that kind of automated recognition.
People may change their decisions about consent or may object to subsequent uses of data about themselves. Retaining rights requires ongoing control, not just at the time of collection.
The OECD Privacy Principles [OECD-Guidelines], [Records-Computers-Rights], and the [GDPR], among other places, include many of the rights people have as data subjects. These participatory rights by people over data about themselves are inherent to autonomy.
Data is de-identified when there exists a high level of confidence that no person described by the data can be identified, directly or indirectly (e.g. via association with an identifier, user agent, or device), by that data alone or in combination with other available information. Note that further considerations relating to groups are covered in the Collective Issues in Privacy section.
We talk of controlled de-identified data when:
Different situations involving controlled de-identified data will require different controls. For instance, if the controlled de-identified data is only being processed by one actor, typical controls include making sure that the identifiers used in the data are unique to that dataset, that any person (e.g. an employee of the actor) with access to the data is barred (e.g. based on legal terms) from sharing the data further, and that technical measures exist to prevent re-identification or the joining of different data sets involving this data, notably against timing or k-anonymity attacks.
In general, the goal is to ensure that controlled de-identified data is used in a manner that provides a viable degree of oversight and accountability such that technical and procedural means to guarantee the maintenance of pseudonymity are preserved.
This is more difficult when the controlled de-identified data is shared between several actors. In such cases, typical controls representative of best practice include making sure that:
the identifiers used in the data are under the direct and exclusive control of the first party (the actor a person is directly interacting with) who is prevented by strict controls from matching the identifiers with the data;
when these identifiers are shared with a third party, they are made unique to that third party, such that if identifiers are shared with more than one third party, those third parties cannot match them up with one another (see the sketch after this list);
there is a strong level of confidence that no third party can match the data with any data other than that obtained through interactions with the first party;
any third party receiving such data is barred (e.g. based on legal terms) from sharing it further;
technical measures exist to prevent re-identification or the joining of different data sets involving this data, notably against timing or k-anonymity attacks; and
there exist contractual terms between the first party and third party describing the limited purpose for which the data is being shared.
Note that controlled de-identified data, on its own, is not sufficient to make data processing appropriate.
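As a minimal TypeScript sketch of the identifier control referenced in the list above, a first party could derive a distinct pseudonymous identifier per third party by keying a MAC on the recipient's name, so that two recipients cannot join their datasets on it. This assumes an environment with the Web Crypto API and is not, on its own, a complete de-identification scheme.

```
// Sketch: deriving identifiers that are unique per third party, so recipients
// cannot match them up with one another. The key stays under the first
// party's exclusive control.
async function partnerScopedId(
  firstPartyId: string,
  partnerName: string,
  key: CryptoKey, // e.g. generated once with { name: "HMAC", hash: "SHA-256" }
): Promise<string> {
  const data = new TextEncoder().encode(`${partnerName}:${firstPartyId}`);
  const mac = await crypto.subtle.sign("HMAC", key, data);
  return Array.from(new Uint8Array(mac))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```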
Privacy principles are often defined in terms of extending rights to individuals. However, there are cases in which deciding which principles apply is best done collectively, on behalf of a group.
One such case, which has become increasingly common with widespread profiling, is that of information relating to membership of a group or to a group's behaviour, as detailed in 1.2.1 Group Privacy. As Brent Mittelstadt explains, “Algorithmically grouped individuals have a collective interest in the creation of information about the group, and actions taken on its behalf.” ([Individual-Group-Privacy]) This justifies ensuring that grouped people can benefit from both individual and collective means to support their autonomy with respect to data processing. It should be noted that processing can be unjust even if individuals remain anonymous, not from the violation of individual autonomy but because it violates ideals of social equality ([Relational-Governance]).
Another case in which collective decision-making is preferable is processing for which informed individual decision-making is unrealistic (due to the complexity of the processing, the volume or frequency of processing, or both). Expecting laypeople (or even experts) to make informed decisions about complex data processing, or to make decisions very frequently even when the processing is relatively simple, is unrealistic if we also want them to retain reasonable autonomy in making those decisions.
The purpose of this principle is to require that data governance provide ways to distinguish appropriate data processing without relying on individual decisions whenever the latter are impossible, which is often ([Relational-Governance], [Relational-Turn]).
Which forms of collective governance are recognised as legitimate will depend on the domain. These may take many forms, such as governmental bodies at various administrative levels, standards organisations, worker bargaining units, or civil society fora.
It must be noted that, even though collective decision-making can be better than offloading privacy labour to individuals, it is not necessarily a panacea. When considering such collective arrangements it is important to keep in mind the principles that are likely to support viable and effective institutions at any level of complexity ([IAD]).
A good example of a failure in collective privacy decisions was the standardisation of the ping attribute. Search engines, social sites, and other algorithmic media in the same vein have an interest in knowing which of the sites they link to people choose to visit (which in turn could improve the service for everyone). But people may have an interest in keeping that information private from algorithmic media companies (as do the sites being linked to, as that facilitates timing attacks to recognise people there). A person's exit through a specific link can be tracked by other means, such as redirect-based bounce tracking, which are slower for the person and difficult for user agents to defend against. The value proposition of the ping attribute in this context is therefore straightforward: by providing declarative support for this functionality it can be made fast (the browser sends an asynchronous notification to a ping endpoint after the person exits through a link) and the user agent can provide its user with the option to opt out of such tracking — or disable it by default.

Unfortunately, this arrangement proved to be unworkable on the privacy side (the performance gains, however, are real). What prevents a site from using ping for people who have it activated and bounce tracking for others? What prevents a browser from opting everyone out because it wishes to offer better protection by default? Given the contested nature of the ping attribute and the absence of a forcing function to support collective enforcement, the scheme failed to deliver its intended privacy benefits.
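For reference, the ping attribute is a declarative HTML feature; the following TypeScript sketch shows the equivalent set via the DOM, with illustrative URLs.

```
// Sketch: the ping attribute, set via the DOM (declaratively this is
// <a href="..." ping="...">). When the person follows the link, the browser
// sends asynchronous notifications to the listed ping URLs, and a user agent
// can offer a setting to skip them. URLs here are illustrative.
const link = document.createElement("a");
link.href = "https://news.example/article";
link.ping = "https://search.example/exit-notify"; // space-separated list of URLs
link.textContent = "Read the article";
document.body.append(link);
```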
Computing devices have owners, and those owners have administrator access to the devices in order to install and configure the programs, including user agents, that run on them. Sometimes, as in the cases of an employer providing a device to an employee, a friend loaning a device to their visitor, or a parent providing a device to their small child, the person using a device doesn't own the device or have administrator access to it. Other times, as in the cases of intimate partners or one relative helping another relative with their device, the owner and primary user of a device might not be the only person with administrator access. As a program running on a device, a user agent generally can't tell whether the administrator who has installed and configured it was authorized by the device's actual owner.
These relationships can involve power imbalances. A child may have difficulty accessing any computing devices other than the ones their parent provides. A victim of abuse might not be able to prevent their partner from having administrator access to their devices. An employee might have to agree to use their employer's devices in order to keep their job.
While a device owner has an interest and sometimes a responsibility to make sure their device is used in the ways they intended, the person using the device still has a right to privacy while using it. The above principles enforce this right to privacy in two ways:
Some administrator requests might be reasonable for some sorts of users, like employees or especially children, but not be reasonable for other sorts, like friends or intimate partners. In those cases, the user agent can explain what the administrator is going to learn in a way that also says what sort of user is expected to agree. Users in other classes can then react appropriately.
Online harassment is the "pervasive or severe targeting of an individual or group online through harmful behavior" [PEN-Harassment]. Harassment is a prevalent problem on the Web, particularly via social media. While harassment may affect any person using the Web, it may be more severe and its consequences more impactful for LGBTQ people, women, people in racial or ethnic minorities, people with disabilities, vulnerable people and other marginalized groups.
Harassment is itself a violation of privacy, and it can be magnified or facilitated by other violations of privacy.
Abusive online behavior may include: sending unwanted information; directing others to contact or bother a person ("dogpiling"); disclosing sensitive information about a person; posting false information about a person; impersonating a person; insults; threats; and hateful or demeaning speech.
Disclosure of identifying or contact information (including "doxxing") can be used, including by additional attackers, to send often persistent unwanted information that amounts to harassment. Disclosure of location information can be used, including by additional attackers, to intrude on a person's physical safety or space.
Mitigations for harassment include but extend beyond mitigations for unwanted information and other privacy principles. Harassment can include harmful activity with a wider distribution than just the target of harassment.
Reporting mechanisms are mitigations, but may not prevent harassment, particularly in cases where hosts or intermediaries are supportive of or complicit in the abuse.
Effective reporting is likely to require:
Receiving unsolicited information that either may cause distress or waste the recipient's time or resources is a violation of privacy.
Unwanted information covers a broad range of unsolicited communication, from messages that are typically harmless individually but that become a nuisance in aggregate (spam) to the sending of images that will cause shock or disgust due to their graphic, violent, or explicit nature (e.g. pictures of one's genitals). While it is impossible, in a communication system involving many people, to offer perfect protection against all kinds of unwanted information, steps can be taken to make the sending of such messages more difficult or more costly, and to make the senders more accountable. Examples of mitigations include:
This section is still being refined. We expect additional principles to be added.
An individual may not realise when they disclose personal data that they are vulnerable or could become vulnerable. Some individuals may be more vulnerable to privacy risks or harm as a result of the collection, misuse, loss, or theft of personal data because of their attributes, interests, opinions, or behaviour. Others may be vulnerable because of their situation or setting (e.g., where there is information asymmetry or another power imbalance), because they lack the capacity to fully assess the risks, or because choices are not presented in an easy-to-understand, meaningful way (e.g., deceptive patterns). Yet others may be vulnerable because they have not been consulted about their privacy needs and expectations, or considered in decisions about the design of the product or service.
Sometimes communities of individuals are classed as “vulnerable”, typically children and the elderly, but anyone could become privacy vulnerable in a given context. Additional privacy protections may be needed for personal data of vulnerable individuals or sensitive information which could cause someone to become vulnerable if their personal data is collected, used or shared.
Even in populations of individuals classed as “vulnerable” (such as children), each individual is unique with their own desires and expectations for privacy. While sometimes others can help vulnerable individuals assess privacy risks and make decisions about privacy (such as parents, guardians and peers), everyone has their own right to privacy.
Some classes of vulnerable people tend to be unable to make good decisions about their own web use, and need a guardian to help them. Children are a widely recognized example of this class, with their parents often acting as their guardians. A person with a guardian is known as a ward.
Many legal systems treat these guardianship relationships as a set of rights that the guardian possesses. We prefer to instead think of the ward having a right to make informed decisions and exercise their autonomy. Their guardian then has an obligation to help their ward do so when the ward's abilities aren't sufficient, even if that conflicts with the guardian's desires. In practice, many wards discover that their guardian is not making decisions in the ward's best interest, and it's critical that such wards have a way to escape their misbehaving guardian.
Historically, the Web has provided exactly this escape route, and user agents should preserve that feature by correctly balancing a benevolent guardian's need to protect their ward from dangers against other wards' need to protect themselves from their misbehaving guardians.
Attempts to obtain consent to processing that is not in accordance with the person's true preferences result in imposing unwanted privacy labour on the person, and may result in people erroneously giving consent that they regret later.
Examples of alternatives to interrupting users with consent requests include:
Considering the information sharing norms in the site's audience and category, and requesting only consent that is appropriate to the purpose of the site. (For example, a photo sharing site's users might expect to be prompted for consent to share their uploaded work.) Sites should consider conducting user research on people's expectations for how data is processed.
Relying on a global opt-out signal from the user agent.
Delaying a prompt for consent until a user does something that puts the request in context, which will also help them give an informed response.
Notifications and other interruptive UI can be a powerful way to capture attention. Depending on the operating system in use, a notification can appear outside of the browser context (for example, in a general notifications tray) or even cause a device to buzz or play an alert tone. Like all powerful features, notifications can be misused and can become an annoyance or even used to manipulate behaviour and thus reduce autonomy.
User agents should provide UI that allows their users to audit which web sites have been granted permission to display alerts and to revoke these permissions. User agents should also apply some quality metric to the initial request for permissions to receive notifications (for example, disallowing sites from requesting permission on first visit).
When requesting permission to send interruptive notifications, web sites should tell their users what specific kind of information they can expect to receive and how notifications can be turned off. Web sites should not request permission to send notifications when the user is unlikely to have sufficient knowledge (e.g. information about what kinds of notifications they are signing up for) to make an informed response. If it's unlikely that such information could have been provided, then the user agent should apply mitigations (for example, warning about potential malicious use of the notifications API). Permissions should be requested in context.
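For example, a site might defer the browser's permission prompt until the person has taken a related action and has been told what the notifications will contain. The sketch below uses the standard Notification API; the element id and the explanatory copy are illustrative assumptions, not requirements.

```ts
// Minimal sketch: request notification permission only after an in-context
// user action, once the person knows what they are signing up for.
// The id "enable-order-updates" and the wording are illustrative only.
const button = document.getElementById("enable-order-updates");

button?.addEventListener("click", async () => {
  // The surrounding page has already explained: "We'll notify you when your
  // order ships, and you can turn this off at any time in Settings."
  if (!("Notification" in window) || Notification.permission === "denied") {
    return; // Respect a previous refusal instead of re-prompting.
  }
  const result = await Notification.requestPermission();
  if (result === "granted") {
    new Notification("Order updates enabled", {
      body: "You can turn these off at any time in Settings.",
    });
  }
});
```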
Whenever people have the ability to cause an actor to process less of their data or to stop carrying out some given set of data processing that is not essential to the service, they must be allowed to do so without the actor retaliating, for instance by artificially removing an unrelated feature, by decreasing the quality of the service, or by trying to cajole, badger, or trick the person into opting back into the processing.
Actors can invest time and energy into automating ways of gathering data from people and can design their products in ways that make it much easier for people to disclose information than not, whereas people typically have to manually wade through options, repeated prompts, and deceptive patterns. In many cases, the absence of data — when a person refuses to provide some information — can also be identifying or revealing. Additionally, APIs can be defined or implemented in rigid ways that prevent people from accessing useful functionality. For example, I might want to look for restaurants in a city I will be visiting this weekend, but if my reported geolocation is forced to match my device's GPS position, a restaurant-finding site might only allow searches in my current location. In other cases, sites do not abide by data minimisation principles and request more information than they require. This principle supports people in minimising their own data.
User agents should make it simple for people to present the identity they wish to and to provide information about themselves or their devices in ways that they control. This helps people to live in obscurity ([Lost-In-Crowd], [Obscurity-By-Design]), including by obfuscating information about themselves ([Obfuscation]).
Instead of being designed to expose accurate data about a person's device or surroundings, an API could indicate a person's preference, a person's chosen identity, a person's query or interest, or a person's selected communication style.
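To make the restaurant example above concrete, an API can be shaped around the person's chosen place of interest rather than their sensed position. The sketch below is purely hypothetical: the endpoint, function, and type names are invented for illustration.

```ts
// Hypothetical sketch: a search API that takes the person's *chosen* location
// (a preference or query) instead of demanding their sensed GPS position.
// `PlaceOfInterest`, `searchRestaurants` and `/api/restaurants` are invented names.
interface PlaceOfInterest {
  label: string; // e.g. "the city I'm visiting this weekend"
  city: string;
}

async function searchRestaurants(place: PlaceOfInterest): Promise<string[]> {
  // The request expresses the person's interest, not their current whereabouts.
  const response = await fetch(`/api/restaurants?city=${encodeURIComponent(place.city)}`);
  return response.json();
}

// Usage: the person searches where they plan to be, with no GPS involved.
searchRestaurants({ label: "weekend trip", city: "Lyon" }).then(console.log);
```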
For example, a user agent might support this principle by:
Sites should include deception in their threat modeling and not assume that Web platform APIs provide any guarantees of consistency, currency, or correctness about the user. People often have control of the devices and software they use to interact with web sites. In response to site requests, people may arbitrarily modify or select the information they provide for a variety of reasons, including both malice and self-protection.
In the rare instances in which an API must be defined as returning true current values, users may still configure their agents to respond with other information, for reasons including testing, auditing, or mitigating forms of data collection such as browser fingerprinting.
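As a small illustration of this threat model, a site can consume client-reported signals for presentation while never relying on them for security, pricing, or compliance decisions. The names below are illustrative, not drawn from any specification.

```ts
// Minimal sketch: client-supplied signals treated as unverified hints.
// `Preferences` and `readClientHints` are illustrative names.
interface Preferences {
  reducedMotion: boolean;
  language: string;
}

function readClientHints(): Preferences {
  return {
    // Safe to use for presentation: the worst case is a different-looking page.
    reducedMotion: window.matchMedia("(prefers-reduced-motion: reduce)").matches,
    language: navigator.language,
  };
}

// Deliberately not done here: gating security, pricing, or compliance decisions
// on values like these, since people may legitimately alter what their user
// agent reports (testing, auditing, self-protection, fingerprinting mitigation).
```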
A person (also user or data subject) is any natural person. Throughout this document, we primarily use person or people to refer to human beings, as a reminder of their humanity. When we use the term user, it is to talk about the specific person who happens to be using a given system at that time.
A vulnerable person is a person who may be unable to exercise sufficient self-determination in a context. Amongst other things, they should be treated with greater default privacy protections and may be considered unable to consent to various interactions with a system. People can be vulnerable for different reasons: for example, because they are children, are employees with respect to their employers, are facing a steep asymmetry of power, are in situations of intellectual or psychological impairment, are refugees, etc.
A context is a physical or digital environment in which people interact with other actors, and which the people understand as distinct from other contexts.
An actor is an entity that a person can reasonably understand as a single "thing" they're interacting with. Actors can be people or collective entities like companies, associations, or governmental bodies. Uses of this document in a particular domain are expected to describe how the core concepts of that domain combine into a user-comprehensible actor, and those refined definitions are likely to differ between domains.
User agents tend to explain to people which origin or site provided the web page they're looking at. The actor that controls this origin or site is known as the web page's first party. When a person interacts with a UI element on a web page, the first party of that interaction is usually the web page's first party. However, if a different actor controls how data collected with the UI element is used, and a reasonable person with a realistic cognitive budget would realize that this other actor has this control, this other actor is the first party for the interaction instead.
The first party to an interaction is accountable for the processing of data produced by that interaction, even if another actor does the processing.
A third party is any actor other than the person visiting the website or the first parties they expect to be interacting with.
The Vegas Rule is a simple implementation of privacy in which "what happens with the first party stays with the first party." Put differently, the Vegas Rule is followed when the first party is the only data controller. While the Vegas Rule is a good guideline, it's neither necessary nor sufficient for appropriate data processing. A first party that maintains exclusive access to a person's data can still process it inappropriately, and there are cases where a third party can learn information about a person but still treat it appropriately.
We define personal data as any information that is directly or indirectly related to an identified or identifiable person, such as by reference to an identifier ([GDPR], [OECD-Guidelines], [Convention-108]).
On the web, an identifier of some type is typically assigned to an identity as seen by a website, which makes it easier for an automated system to store data about that person.
Examples of identifiers for a person can be:
If a person could reasonably be identified or re-identified through the combination of data with other data, then that data is personal data.
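As a hedged illustration of re-identification by combination, consider two datasets that each look harmless on their own. All fields and records below are invented for illustration.

```ts
// Hypothetical sketch: "anonymous" records become personal data once they can
// be joined with another dataset on shared quasi-identifiers.
interface HealthRecord { zip: string; birthDate: string; diagnosis: string }
interface VoterRecord  { zip: string; birthDate: string; name: string }

// Joining on (zip, birthDate) can attach a name to each health record,
// so the health data is indirectly related to identifiable people.
function reidentify(health: HealthRecord[], voters: VoterRecord[]) {
  return health.map((h) => ({
    ...h,
    name: voters.find((v) => v.zip === h.zip && v.birthDate === h.birthDate)?.name,
  }));
}
```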
Privacy is achieved in a given context (whether it involves personal data or information being presented to people) when the principles of that context are followed appropriately. When the principles for that context are not followed, there is a privacy violation. Similarly, we say that a particular interaction is appropriate when the principles are adhered to, and inappropriate otherwise.
An actor processes data if it carries out operations on personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, sharing, dissemination or otherwise making available, selling, alignment or combination, restriction, erasure or destruction.
An actor shares data if it provides that data to any other actor. Note that, under this definition, an actor that provides data to its own service providers is not sharing it.
An actor sells data when it shares it in exchange for consideration, monetary or otherwise.
The purpose of a given processing of data is an anticipated, intended, or planned outcome of this processing which is achieved or aimed for within a given context. A purpose, when described, should be specific enough to be actionable by someone familiar with the relevant context (i.e., they could independently determine means that reasonably correspond to an implementation of the purpose).
The means are the general method of data processing through which a given purpose is implemented, in a given context, considered at a relatively abstract level and not necessarily all the way down to implementation details. Example: a person will have their preferences restored (purpose) by looking up their identifier in a preferences store (means).
A data controller is an actor that determines the means and purposes of data processing. Any actor that is not a service provider is a data controller.
A service provider or data processor is considered to be in the same category of first party or third party as the actor contracting it to perform the relevant processing if it:
User agents should attempt to defend the people using them from a variety of high-level threats or attacker goals, described in this section.
These threats are an extension of the ones discussed by [RFC6973].
These threats combine into the particular concrete threats we want web specifications to defend against, described in the sections that follow.
Some of the definitions in this document build on top of the work in Tracking Preference Expression (DNT).
The following people, in alphabetical order of their first name, were instrumental in producing this document: Amy Guy, Chris Needham, Christine Runnegar, Dan Appelquist, Don Marti, Jonathan Kingston, Nick Doty, Peter Snyder, Sam Weiler, Tess O'Connor, and Wendy Seltzer.