Earlier this month, Google announced that they would be delaying the deprecation of support for third-party cookies in their Chrome browser until the second half of 2023. While marketers and advertisers across the globe breathed a collective sigh of relief, it’s important to remember that this is not a permanent reprieve—cookies are still on death row … their execution date has just been delayed a bit.
A primary reason for this postponement of the cookiepocalypse is the relative nascency of many of the proposals in Chrome’s Privacy Sandbox. One of Chrome’s core stances when they announced the third-party cookie deprecation was that they would remove support only once the industry was able to accomplish core marketing and advertising use cases in privacy-safe ways. Hence the Privacy Sandbox initiative.
At this stage, you have certainly heard about FLoC, FLEDGE, and the various conversion APIs for measurement in the works. Of those core initiatives, none are fully supported in production at the time of writing. In addition to these core proposals, there are a number of what I like to refer to as “secondary proposals” that are critical to ensuring a privacy-safe environment in a post-cookie world. Here we will explore these secondary proposals, their implications, and status.
Chrome has taken the stance from the beginning that they are against “blunt” approaches to blocking third-party cookies. Their position is that these cookies are used across the web for a number of use cases that support the free and open, ad-supported web. If browsers block access to third-party cookies without offering alternatives or additional protections, vendors will resort to more nefarious means of tracking, specifically browser fingerprinting.
Browser fingerprinting is an approach in which a platform collects a number of disparate, static data points from the user: IP address, device type, operating system, screen resolution, location, and so on. While no one of these items is a unique identifier in isolation, by collecting several of these signals and layering them on top of one another, a platform can create a unique “fingerprint” of the user. This fingerprint is then translated into a unique ID with which the user’s activity is associated in order to build a profile and target the user across the web.
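As a rough illustration, the layering of individually weak signals can be simulated in a few lines of Python. The attribute names and values below are hypothetical, and real fingerprinting scripts collect many more surfaces; this sketch only shows how combining and hashing them yields a stable, quasi-unique ID:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    # Serialize the attributes in a stable (sorted) order so the same
    # device always produces the same fingerprint, regardless of the
    # order in which the data points were collected.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two hypothetical visitors who differ in only one attribute still
# produce entirely different fingerprint IDs.
user_a = {
    "ip": "203.0.113.7",
    "os": "macOS 12.4",
    "screen": "2560x1440",
    "timezone": "America/New_York",
    "language": "en-US",
}
user_b = dict(user_a, screen="1920x1080")

print(fingerprint(user_a) == fingerprint(user_b))  # → False
```

Note that no single input here identifies anyone; it is the combination that does, which is exactly why fingerprinting is hard to police surface by surface.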
Compared with the standard use of third-party cookies to accomplish this cross-site tracking, fingerprinting is more difficult to detect, less transparent, and gives users no control. In many ways it is more problematic than cookies.
The privacy budget proposal in the Privacy Sandbox is meant to protect against these methods of browser fingerprinting. The budget is a technical mechanism to limit the amount of device and browser data exposed when a user visits a site. By limiting the number of data points available, the browser can ensure that the data which is available is insufficient to identify and track an individual user.
Chrome is taking a multi-stage approach to solving this problem. First, they will measure what information is exposed by each fingerprinting surface (a fingerprinting surface is an interaction point where a website can learn something that is stable or semi-stable for a given user or device and varies between users or devices). With those measurements in hand, they will determine how much and what information is exposed to each site. Finally, they will define and enforce the “privacy budget” limiting how much information can be accessed. At this stage, various approaches to enforcement are still being discussed.
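The budget idea can be sketched as simple accounting. The toy model below assumes each fingerprinting surface has a fixed cost in “bits” of identifying information; the surface names, bit costs, and budget value are all hypothetical, since the real enforcement mechanism is still under discussion:

```python
# Hypothetical cost, in bits of identifying information, of reading
# each fingerprinting surface. Real values would come from Chrome's
# measurement stage.
SURFACE_BITS = {
    "screen_resolution": 4.0,
    "timezone": 3.0,
    "installed_fonts": 7.0,
    "user_agent": 5.0,
}

PRIVACY_BUDGET = 10.0  # hypothetical per-site cap on identifying bits

class BudgetTracker:
    """Tracks how much identifying information one site has consumed."""

    def __init__(self, budget: float = PRIVACY_BUDGET):
        self.budget = budget
        self.consumed = {}  # surface -> bits already charged to this site

    def request(self, surface: str) -> bool:
        cost = SURFACE_BITS[surface]
        if surface in self.consumed:
            return True  # already charged; re-reading adds no new information
        if sum(self.consumed.values()) + cost > self.budget:
            return False  # deny access: the budget would be exceeded
        self.consumed[surface] = cost
        return True

site = BudgetTracker()
print(site.request("screen_resolution"))  # True  (4.0 of 10.0 used)
print(site.request("user_agent"))         # True  (9.0 of 10.0 used)
print(site.request("installed_fonts"))    # False (would reach 16.0)
```

The design question still open in the proposal is what “deny” should mean in practice: blocking the API call outright, returning a noised or generic value, or something in between.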
This proposal must be live before the deprecation of third-party cookies in Chrome can come to fruition, and it is implicated in a number of the core Sandbox proposals. For example, one of the privacy community’s main criticisms of the FLoC proposal was that the cohort made available in the browser represented another fingerprinting surface and would make profiling and identifying individual users easier. A solution such as the privacy budget would work in tandem with a solution such as FLoC to make it far more privacy-safe and thus more acceptable for adoption.
Today, many solutions to combat ad fraud use approaches similar to device fingerprinting to identify users in a cross-site context and determine their legitimacy. With other Sandbox proposals such as the privacy budget intended to limit fingerprinting, this traditional approach will not be possible. In response, there is a proposal for a new API and token approach to distinguish between a real user and a fraudulent bot.
Without wading into the technical specifics (which are still very much in the design phase at the time of writing), this proposal puts forth a method for “trust tokens” to be issued and stored in a browser in one context (for example, a social media site that has authenticated a user) and then “redeemed” in another context that has no reason of its own to trust the current browser (say, by an ad on a publisher site). The presence of a trust token and its successful redemption would establish that a real user, and not fraudulent bot traffic, is accessing the site, all without identifying the user or linking the two identities.
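The issue-and-redeem flow can be sketched conceptually as follows. This simulation stands in for the real cryptography (the actual proposal uses blind signatures precisely so the issuer cannot link issuance to redemption); the function names and the HMAC-based check are illustrative only:

```python
import hashlib
import hmac
import secrets

# Secret held by the token issuer (e.g. a site that has authenticated
# the user and therefore trusts the browser). Hypothetical setup.
ISSUER_KEY = secrets.token_bytes(32)

def issue_token() -> tuple:
    """Issuer mints an anonymous token for a browser it trusts."""
    nonce = secrets.token_bytes(16)  # random; carries no user identity
    sig = hmac.new(ISSUER_KEY, nonce, hashlib.sha256).digest()
    return nonce, sig

def redeem_token(nonce: bytes, sig: bytes, spent: set) -> bool:
    """A publisher asks the issuer to verify the token is genuine and unspent."""
    if nonce in spent:
        return False  # each token may be redeemed only once (no replay)
    expected = hmac.new(ISSUER_KEY, nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged token: this browser never earned trust
    spent.add(nonce)
    return True

spent = set()
nonce, sig = issue_token()              # browser stores the token...
print(redeem_token(nonce, sig, spent))  # ...redeemed on a publisher site: True
print(redeem_token(nonce, sig, spent))  # replay attempt is rejected: False
```

The key property the sketch preserves is that the token itself contains nothing about who the user is; it only attests that *some* trusted context vouched for this browser.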
Another proposed approach to limiting browser fingerprinting is the concept of willful IP blindness, which would allow a resource loading on a site to be blinded to the user’s IP address. The proposal is meant to 1) remove one more fingerprinting surface and 2) allow something that is typically always passively exposed (the IP address) to be voluntarily blinded, thus not eating into the privacy budget consumed by the site in question.
In today’s web, third-party cookies are the mechanism used to pass information (such as a user’s ID) across domains. In some cases, a single entity owns multiple domains. (Think of a multi-brand organization with a site per brand, or a single brand with a global presence and .com, .co.uk, and .de sites.) These sites would be considered “first party” in the real-world context but are not in the digital context. In order to better align the web’s definition of first and third party with the real-world definition, the First Party Sets (FPS) proposal was introduced.
Specifically, FPS would allow multiple domains that belong to the same first-party entity to be classified as a single first party for purposes of cookie and data access from a user’s browser. In the proposal, sites are declared as being owned by the same entity and confirmed as such by a verification mechanism. Once the sites are verified as being owned by the same first-party entity, cookies could be accessed across each of the declared domains.
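Conceptually, a set declaration and a same-party check might look like the following sketch. The domains and the shape of the declaration are hypothetical and do not reflect the final specification:

```python
# Hypothetical First Party Sets declaration: an owner domain lists the
# member domains it claims, and the claim is assumed to have passed the
# proposal's verification step.
FIRST_PARTY_SETS = [
    {"owner": "brand.com", "members": ["brand.co.uk", "brand.de"]},
]

def same_first_party(domain_a: str, domain_b: str) -> bool:
    """True if both domains belong to one declared (and verified) set."""
    for fps in FIRST_PARTY_SETS:
        party = {fps["owner"], *fps["members"]}
        if domain_a in party and domain_b in party:
            return True
    return domain_a == domain_b  # a domain is always first-party to itself

print(same_first_party("brand.com", "brand.de"))   # True: same declared set
print(same_first_party("brand.com", "other.com"))  # False: still cross-site
```

Under such a model, cookie access between brand.com and brand.de would be treated as first-party, while access from any domain outside the set would remain third-party and subject to the usual restrictions.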
An initial origin trial for this proposal has been completed but it is still in development. Once live, this will help many organizations as it relates to first-party data collection in this new cookieless environment.
It’s important to remember that the Privacy Sandbox is not just about providing new methods for targeting users and measuring campaign performance; there are a number of additional proposals meant to better protect user privacy in the new privacy-focused environment. These proposals are critical not just to the success of the new targeting methods, but also to the success of a free and open web.