
Optable Blog

Learn about the modern advertising landscape and how Optable's solutions can help your business.


The crisp February air of Toronto welcomed a select group of media & advertising thought leaders to Optable's exclusive summit. The agenda promised deep dives into data strategy, privacy's impact, and navigating the ever-evolving media landscape. And it definitely delivered.

Data Collaboration Takes Center Stage

The opening panel, "How Publishers & Advertisers Are Using Data to Build Better Ad Campaigns in the Age of Privacy," kicked things off with a bang. Data collaboration emerged as the undeniable hero, bridging the gap in a fragmented ecosystem. Panelists from La Presse, The Globe & Mail, and Advance powered by Loblaw discussed their shared journey: adapting data strategies and wielding identity solutions, all while dancing around ever-changing privacy regulations. The panel was moderated by Optable's own Ioana Tirtirau, Head of Customer Success, who helped the crowd glean actionable insights to implement within their own businesses.

One key takeaway? It's not just about the tech. "The future of advertising lies in finding the sweet spot where data insights combine to create a better experience for the audience and ultimately create business growth. Data is the interface with which we're able to create better advertising partnerships," said one publisher exec. The audience couldn't have agreed more, recognizing the need for meaningful campaigns that respect customer privacy and provide real insights into customers' wants and needs.

Privacy: The Driving Force (and Opportunity)

Deloitte's fireside chat shifted gears, focusing on the elephant in the room – privacy. Experts dissected the seismic shifts caused by regulations and platform moves, highlighting not just the challenges but also the opportunities. "CCPA, GDPR, Law 25, cookie deprecation – it's all about building trust," emphasized a Deloitte speaker. "And trust generates loyalty & engagement, which is the real gold in this game."

Beyond Trends: The Human-Centric Shift

The summit wasn't just about buzzwords and tech. It was about understanding that data and privacy are inherently human-centric. At its core, advertising is about connecting with people, and in the privacy age, that means that collaboration is key.

The cocktail hour wasn't just a networking opportunity; it was a testament to the energy and ideas bubbling up from the room. From Optable's own data experts to seasoned ad veterans, everyone recognized that the future isn't pre-programmed – it's in the hands of innovative minds who can harness data, respect privacy, and ultimately, rethink and rearchitect the media & advertising ecosystem to be more impactful for audiences and more sustainable for businesses.

Key Takeaways:

  • Data collaboration is growing rapidly, with the major cloud ecosystems acting as stewards.
  • Privacy regulations create challenges, but also unexpected opportunities.
  • Third-party cookies are officially on their way out, creating a forcing function to rethink our ecosystem.

Optable's 'State of Data Collaboration' in Toronto wasn't just a glimpse into the future; it was a blueprint for navigating it. Armed with actionable insights and a renewed focus on the human element, data & advertising professionals left the venue empowered to redefine success in the privacy-first era.

In this blog, we will outline what audience activation is, why it is important for marketers, and how to activate audiences. We’ll discuss how publishers and advertisers can work together to connect data with technology like Google’s Ad Manager, Prebid.org, leading DSPs such as The Trade Desk and Amazon DSP, and major ad platforms like TikTok and Meta for activation purposes. 

What is audience activation and why is it important for marketing?

Audience activation, in the context of advertising, is the process of identifying and targeting a specific audience with relevant content and offers. It is important for marketers to use audience activation because it allows them to reach their target audience more effectively and efficiently. Audience activation can improve marketing campaigns by increasing brand awareness, driving leads, and generating sales.

Marketers today are faced with complex buyer journeys, exacerbated by the loss of third-party data and cookie deprecation. To succeed, marketers must move away from the channel-first approach and create unified customer profiles with data from all available channels. With a centralized profile based on first-party data, marketers can target audiences with just the right message, at the right time.

By maximizing partnerships with media partners, marketers can look across consumer touchpoints to increase the effectiveness of activation. Technology can help organizations gain value from advertising partnerships while navigating challenges with data privacy. 

How to activate audiences with connected data 

Data is usually kept in a cloud environment like Snowflake, GCP, AWS or Databricks, and activation technology must be able to work with the data environment safely and with privacy in mind. 

Optable Collaborate is a Data Clean Room solution, fully interoperable across cloud environments. It utilizes leading privacy-enhancing technologies (PETs), is purpose-built with advertising-specific frameworks, and offers a simple pricing & activation model.

Collaboration is enabled by tools like Optable’s DMP, which creates, segments and analyzes audience data before and after it is utilized within a clean room. Optable DMP provides an easy-to-use interface for commercial teams and plugs into a wide array of data sources, including real-time & event-level data, allowing you to scale & manage the importing, building, activation and measurement of audiences throughout all phases of advertising.

Once data is collated in a privacy-safe way, these are the steps to take toward activation:

  • Create, segment, and analyze your audience data. This will help you understand who your target audience is and what they are interested in.
  • Deep dive into each audience segment. This will allow you to tailor your content and offers to each segment's specific needs.
  • Form partnerships with publishers and advertisers. This will give you access to a larger audience and allow you to reach them more effectively.
  • Connect your data with key advertising technologies. This will help you track the effectiveness of your campaigns and optimize your results.
  • Control the security and privacy of your data. This is essential to ensure that your data is used responsibly and ethically.
  • Analyze across audience and ad event data. This will help you understand how your campaigns are performing and make necessary adjustments.

Once audiences are activated effectively, with data-informed decisioning, marketing campaigns will become markedly more effective. To find out more, contact us for a demo.

Differential Privacy has emerged as a powerful technique to protect individual privacy while still reaping the benefits of data-driven insights. In this blog, we’ll explore differential privacy, how it works, and how media companies can use it to safeguard sensitive data about consumers. 

What is Differential Privacy?

Differential privacy is a privacy-enhancing technology (PET) that allows organizations to analyze data while preserving the privacy of individual people. The core principle is to ensure that no specific piece of information about an individual can be inferred from the results of a query or analysis. This means that the results of the analysis will look nearly the same regardless of whether any single individual's information was included in the analysis or not.

How Does Differential Privacy Work?

Differential privacy is achieved by adding carefully calibrated random noise to the dataset at a high enough rate that it protects privacy, but not so high that it diminishes utility. This can be achieved in two ways:

  1. Randomized Responses: Data is intentionally perturbed or randomized to introduce uncertainty into the results. This means that the output of a query or analysis is not an exact representation of the raw data, but rather a noisy version. For example, when queried about an individual's interest in sports, a differentially private system would sometimes report their true response and sometimes a random one, giving every individual plausible deniability.
  2. Noisy Aggregates: Differential privacy is often used in situations where data is aggregated, and reported as noised group summaries. This ensures that no specific individual's information can be inferred. For example, if 353 individuals are interested in sports, a differentially private system would add random noise and report it as 347 or 360.
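The noisy-aggregate approach can be sketched with the Laplace mechanism, the most common way to add differentially private noise to counts. The toy Python example below (an illustration, not a production implementation) calibrates the noise to a privacy parameter epsilon: smaller epsilon means more noise and stronger privacy.

```python
import random

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Report a count with Laplace noise of scale sensitivity/epsilon.
    Smaller epsilon -> larger noise -> stronger privacy guarantee."""
    scale = sensitivity / epsilon
    # a Laplace(0, scale) sample is the difference of two exponential samples
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return round(true_count + noise)

true_count = 353  # individuals interested in sports
print(dp_count(true_count, epsilon=0.5))  # e.g. 347 or 360: close to 353, but noisy
```

Averaged over many queries the noise cancels out, which is why aggregate utility survives even though any single individual's contribution is masked.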

Any statistical analysis, whether using differential privacy or not, still leaks some information about the end users whose data are analyzed. As more and more analyses are performed on the same individuals or end users, this privacy loss can quickly accumulate. Fortunately, differential privacy provides formal methods for tracking and limiting this cumulative privacy loss.

How are Media Companies Using Differential Privacy?

Differential privacy facilitates secure data sharing among media organizations and marketers, promoting collaboration without compromising any individual’s privacy. This technology is particularly helpful when companies are trying to gather consumer insights from:  

  • Location-Based Services: Companies use differential privacy to aggregate and analyze location data from mobile devices without exposing the exact whereabouts of individual users.
  • Machine Learning: Differential privacy is used to train machine learning models on sensitive data while ensuring that the models do not memorize individual records.
  • Campaign Analytics: Social media platforms and publishers employ differential privacy to report performance insights from an ad campaign, analyze user behavior, and identify trends without compromising any individual user's privacy.

When brands and media companies use differential privacy as one of their PETs, it helps them comply with data privacy regulations as well as build trust with consumers by assuring them that their data is handled with care. 

As data continues to play an essential role in finding and retaining user interest, media companies must implement differential privacy to harness data-driven insights while respecting individual privacy rights. It is poised to be an integral part of data analytics and sharing in a privacy-conscious world.

The need to safeguard sensitive data and ensure the confidentiality of transactions has never been more critical. The Trusted Execution Environment (TEE) emerges as a pivotal technology in the demand for increased data privacy. In this blog, we will delve into the world of TEE, understand what it is, and explore its applications as a privacy-enhancing technology.

What is a Trusted Execution Environment?

TEE is a secure and isolated area within a computer or mobile device's central processing unit (CPU). It's designed to execute code and processes in a highly protected environment, ensuring that sensitive data remains secure and isolated from all other software in the system. It achieves this level of security via special hardware that keeps data encrypted while in use in main memory. This ensures that any software or user, even one with full privileges, sees only encrypted data at any point in time.

How Does TEE Work?

Using special hardware, TEEs encrypt all data that exits the CPU to main memory and decrypt any data returning from it before processing, allowing the code and analytics to operate on plaintext data inside the CPU. This means that TEEs can scale very well compared to pure cryptographic secure computation approaches.

TEEs also offer a useful feature called remote attestation. This means remote clients can establish trust in the TEE by verifying the integrity of the code and data loaded into it, and then establish a secure connection with it.
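To make remote attestation concrete, here is a heavily simplified Python sketch. Real attestation involves a hardware-rooted key signing a "quote" over the enclave's measurement; this toy version shows only the core idea of comparing a measurement of the loaded code against the value the verifier expects.

```python
import hashlib

# the code the TEE is expected to load (the verifier knows this in advance)
EXPECTED_CODE = b"def match(a, b): return a & b"

def measure(code):
    """A TEE 'measurement' is, in essence, a hash over the loaded code and data."""
    return hashlib.sha256(code).hexdigest()

def attest(reported_measurement, expected_code):
    """Simplified remote attestation: the verifier checks the enclave's reported
    measurement against the hash of the code it expects to be running.
    (Real attestation also verifies a hardware-signed quote.)"""
    return reported_measurement == measure(expected_code)

# the enclave reports its measurement; the verifier checks it before sending data
reported = measure(EXPECTED_CODE)
print(attest(reported, EXPECTED_CODE))               # True: code is what we expect
print(attest(measure(b"evil code"), EXPECTED_CODE))  # False: tampered enclave
```

Only after this check succeeds would a client open a secure channel and forward its encrypted data to the enclave.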

How Can Media Companies Benefit From TEEs?

TEEs are an attractive option for media companies who want to safely scale their data operations in a secure environment. TEEs offer the following benefits:

  • Tamper-Resistance: The hardware-based security of TEE provides tamper-resistant execution of code.
  • Secure Communication: Remote attestation provides a way to establish trust between TEEs and remote entities, enabling secure communication.
  • User Trust: TEE builds trust among users, assuring them that their data and transactions are protected.

Now, let’s look at a real-world example of data collaboration using a TEE. In our last blog post, we saw that one way to perform the secure matching in the IAB’s Open Private Join & Activation proposal is using an MPC protocol. Another way to perform this secure matching is using a TEE. With TEE, only one helper server is involved. First, the advertiser and the publisher establish the trust of the TEE via remote attestation. Then, they each forward their encrypted PII data to the TEE server, which decrypts them and performs the match on plaintext data.

TEEs come with their own privacy risks. They are vulnerable to side-channel attacks, such as memory access pattern attacks, which can be exploited to reveal information about the underlying data. Adding side-channel protections can help counter these attacks, but significantly increases the computational overhead. Even with these protections, TEEs still scale well compared to pure cryptographic approaches.

In an industry facing ongoing scrutiny over data privacy concerns, TEEs are becoming a standard. This PET technology will continue to evolve and we expect to see it playing an increasingly vital role in data collaboration. 


Securing Ad Tech: The Role of Secure Computation in Data Privacy

In an era where data is the new gold, ensuring its privacy and security has never been more critical. Secure computation is a powerful branch of cryptography that allows companies to perform computations on sensitive data without revealing the actual information being processed. In this blog, we’ll explore what secure computation is and how it’s used to protect consumer data.

What is Secure Computation?

Secure computation is a cryptographic technique that enables multiple parties to jointly compute a function over their individual inputs while keeping those inputs private. This is known as "encryption in use" because the underlying data remains encrypted while it is being processed on remote servers or in the cloud.

The primary goal of secure computation is to ensure the confidentiality, integrity, and privacy of data throughout the computation process. It accomplishes this without relying on a trusted third party, making it particularly valuable in scenarios where data sharing and privacy are paramount. This means that two or more parties can collaborate on data analysis or computations without exposing their sensitive data to one another.

How are Media Companies and Brands Using Secure Computation to Collaborate?

Secure computation is applied in a range of scenarios where privacy and data security are paramount. Naturally, secure computation is a great fit for data sharing and collaboration among publishers and advertisers.

Both publishers and advertisers can benefit from a type of secure computation called Private Set Intersection (PSI) protocol. It allows two or more parties to compute the intersection of their private datasets without revealing any information about the records not in the intersection. Optable, for instance, provides an open-source matching utility that allows partners of Optable customers to securely match their first-party data sets with them using a PSI protocol.
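As an illustration of the idea behind PSI, here is a toy Diffie-Hellman-style matching sketch in Python. This is not Optable's open-source utility, and it omits the safeguards a real protocol needs (a proper elliptic-curve group, shuffling, collision handling); it only shows how two parties can compare double-blinded identifiers without revealing the records outside the intersection. All identifiers are made up.

```python
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime used as a toy group modulus (illustrative only)

def h(elem):
    """Hash an identifier into an integer mod P."""
    return int.from_bytes(hashlib.sha256(elem.encode()).digest(), "big") % P

def blind(elems, key):
    """Raise each hashed element to a secret exponent mod P."""
    return {pow(h(e), key, P) for e in elems}

def reblind(blinded, key):
    """Apply a second secret exponent to already-blinded values."""
    return {pow(v, key, P) for v in blinded}

# each party holds a private set of identifiers and a secret key
a_set = {"alice@example.com", "bob@example.com", "carol@example.com"}
b_set = {"bob@example.com", "dave@example.com"}
a_key = secrets.randbelow(P - 2) + 1
b_key = secrets.randbelow(P - 2) + 1

# A sends H(x)^a; B re-blinds it to H(x)^(ab). B sends H(y)^b; A re-blinds to H(y)^(ab).
a_double = reblind(blind(a_set, a_key), b_key)
b_double = reblind(blind(b_set, b_key), a_key)

# equal double-blinded values correspond to identifiers present in both sets
overlap_size = len(a_double & b_double)
print(overlap_size)  # 1: only bob@example.com appears in both sets
```

Because exponentiation commutes, both parties arrive at the same value for a shared identifier, while every non-matching record stays blinded by a key the other side never sees.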

How does secure computation work?

Secure computation can be implemented in two main ways: 1) via pure cryptography (using Fully Homomorphic Encryption (FHE) and Secure Multi-Party Computation (MPC)) or 2) through secure hardware (using Trusted Execution Environments (TEEs)).

Fully Homomorphic Encryption

FHE is an incredibly powerful tool for protecting data privacy in the digital age. It enables analytics to be performed on encrypted data without ever having to decrypt it. The ad tech industry can certainly benefit from full-scale analytics without the risk of exposing personally identifiable information (PII).

While FHE has the potential to revolutionize the advertising ecosystem, it is unfortunately quite computationally intensive and limited in its current capabilities. Therefore it is not yet ready for widespread adoption. There is ongoing research to make FHE more efficient and functional in the future.

Secure Multi-Party Computation

MPC is a form of secure computation that uses a cryptographic protocol to enable two or more businesses with private data to perform a joint computation while keeping their individual inputs private. Each entity only learns what can be inferred from the computation result.

Often, the secure computation part is outsourced to two helper servers. Before data leaves a user's device, it is encrypted to both helper servers, which decrypt it partially and perform computation on the partially encrypted data. Neither server is ever able to see the original user data.
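One simple building block behind such two-server designs is additive secret sharing. The sketch below is illustrative only, not a full MPC protocol: each user's value is split into two random shares, one per helper server. Each helper sums only the shares it holds, and only by combining the two partial sums is the aggregate revealed; neither server ever sees an individual value.

```python
import random

MOD = 2**61 - 1  # prime modulus for additive secret sharing

def share(value):
    """Split a value into two random shares; neither share alone reveals it."""
    s1 = random.randrange(MOD)
    s2 = (value - s1) % MOD
    return s1, s2

# two users' private values, each split between helper servers A and B
v1, v2 = 42, 58
a1, b1 = share(v1)  # server A receives a1, server B receives b1
a2, b2 = share(v2)

# each helper server sums only the shares it holds
sum_a = (a1 + a2) % MOD
sum_b = (b1 + b2) % MOD

# only the combination of the two partial sums reveals the aggregate
total = (sum_a + sum_b) % MOD
print(total)  # 100
```

A share on its own is a uniformly random number, so a single honest-but-curious helper learns nothing about any individual's input.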

MPC protocols provide a high level of security but come with a tradeoff: they require sophisticated cryptographic operations that incur higher computation and communication costs. As a result, the technology is tailored to specific tasks and can become very expensive.

How Does Optable Use MPC?

In the past year, Optable has been a leading contributor to the IAB Tech Lab’s Open Private Join and Activation (OPJA) that enables interoperable privacy safe ad activation based on PII data. At the heart of OPJA is a secure match using a PSI protocol that allows advertisers and publishers to match their PII data. One of the ways to perform this match is using MPC — the respective clean room vendors act as the MPC helper servers, which jointly compute the overlap without ever learning the identifiers not in the overlap.

In an age where data privacy is a growing concern, secure computation emerges as a vital technology that plays an important role helping companies comply with data protection regulations while still fostering innovation and cooperation among business partners.

The digital world has brought unprecedented convenience and connectivity but also raised significant concerns about data privacy. As we share more of our lives online, the need for robust privacy-enhancing technologies has become paramount. On-device learning has emerged as a powerful tool to protect personal data while enabling advanced capabilities. In this blog, we will explore on-device learning, its role in enhancing privacy, and how it’s used.

What is On-Device Learning?

On-device learning, sometimes referred to as federated learning, is a machine learning approach that allows training models directly on a user’s device with data available on their device. Only updated model parameters are sent to a remote server or cloud. This means that a user’s smartphone, tablet, or other device can learn and adapt to their preferences without constantly sending their data to remote servers. This gives users more control over their data, protects their privacy, and reduces the need to send raw individual user data to external servers.

How does On-Device Learning Work?

On-device learning operates with the following four principles:

  1. Local Data Processing: Instead of sending your data to the cloud, on-device learning processes data directly on your device. This can include training machine learning models, recognizing patterns, or adapting to a user’s preferences.
  2. Privacy-Preserving Algorithms: Privacy-preserving algorithms ensure that only the updated model parameters leave the device. The user’s personal data remains on their device and is never exposed to third parties. 
  3. Personalized User Experience: On-device learning allows a user’s device to provide a personalized user experience by understanding their preferences, habits, and requirements without compromising data privacy.
  4. Offline Functionality: Due to local data processing, on-device learning enables a user’s device to adapt to their preferences immediately even when it's not connected to the internet. This ensures that the user can benefit from personalized features when they’re offline as well.

How are Marketers Using On-device Learning? 

With on-device learning, online retailers can gain insights into consumers’ preferences and behaviors without tracking individuals. Each consumer’s device downloads the current model and improves it by learning from the data on that device. The model updates from all of these devices are then collected, aggregated, and used to improve the central model. Thus, marketers learn only the overall purchase patterns and behaviors without ever learning individual consumer preferences.

Let’s look at a real-world example of a data collection sequence that uses on-device learning:

  1. A user’s web browser downloads a cross-sell prediction model from an advertising platform like Meta ads or Google Ads. 
  2. The user clicks an ad and makes a purchase. Let’s say they clicked an ad for a smartphone and subsequently bought a smartphone as well as a screen protector. 
  3. The model performs inferences from the purchase data without sending the data to the advertising platform’s server or cloud.
  4. The model gathers such inferences across millions of devices and compiles them to improve the advertising platform's central model. 
  5. Over time, the model improves and can be used to find an increasingly specific audience for screen protectors.
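The aggregation loop above can be sketched in a few lines of Python. This toy federated-averaging example uses a single-parameter model and made-up purchase amounts; real systems train full neural networks and typically layer secure aggregation on top.

```python
# toy federated averaging: each device nudges a shared parameter toward its
# local data, and the server averages the updates, never seeing the raw data
devices = [
    [1.0, 2.0, 3.0],  # purchase amounts observed on device 1
    [4.0, 5.0],       # device 2
    [6.0],            # device 3
]

global_model = 0.0  # a single parameter: predicted purchase amount

for _round in range(5):
    updates, weights = [], []
    for data in devices:
        # local training: move the parameter toward the device's local data;
        # only the updated parameter leaves the device, never the purchases
        local = global_model
        for x in data:
            local += 0.5 * (x - local)  # simple gradient-style step
        updates.append(local)
        weights.append(len(data))
    # server: weighted average of the local models (FedAvg)
    global_model = sum(u * w for u, w in zip(updates, weights)) / sum(weights)

print(round(global_model, 2))  # converges toward the population's overall mean
```

The server only ever receives model parameters, which is exactly the privacy property (and the residual leakage risk) described above.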

On-device learning is not perfect from a privacy perspective. When model parameters leave users’ devices, they still leak information about the underlying local training data, so the risk of sensitive information being shared is only reduced, not completely eliminated. To mitigate this, on-device learning is often combined with other PETs such as differential privacy and secure computation, which we cover in other posts on our blog.

In today's data-driven world, concerns about privacy and data security have never been more critical. k-Anonymity is a privacy concept and technique that plays a pivotal role in safeguarding sensitive data. Let’s explore what k-anonymity is and how it‘s used to protect personal information.

What is k-Anonymity?

k-Anonymity is a privacy model designed to protect the identities of individuals when their data is being shared, published, or analyzed. It ensures that data cannot be linked to a specific person by making it indistinguishable from the data of at least 'k-1' other individuals. In simpler terms, k-anonymity obscures personal information within a crowd, making it very difficult to single out a particular individual.

The 'k' in k-anonymity represents the minimum number of similar individuals (or the “anonymity set”) within the dataset that an individual's data must blend with to guarantee their privacy. For example, if k is set to 5, the data must be indistinguishable from at least four other people's data.

How Does k-Anonymity Work?

To implement k-anonymity, data must be generalized to make it less identifiable, while ensuring that each data point is identical to a minimum of ‘k-1’ other entries. This is commonly done through two methods:

  1. Generalization: Data attributes are generalized to broader, less specific categories. For example, an individual's age may be generalized from their precise age to an age range, like 25-34.
  2. Suppression: Certain attributes may be entirely removed or suppressed if they are considered too revealing. For instance, exact dates of birth or home addresses may be suppressed to protect individual identities.
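Both methods can be sketched in Python. The toy example below generalizes ages into ranges and truncates zip codes, then checks whether every resulting quasi-identifier group contains at least k records; the attribute names and values are illustrative.

```python
# toy k-anonymity: generalize quasi-identifiers, then verify that every
# combination of generalized values appears at least k times in the dataset
records = [
    {"age": 27, "zip": "10001"},
    {"age": 29, "zip": "10002"},
    {"age": 31, "zip": "10001"},
    {"age": 33, "zip": "10002"},
    {"age": 28, "zip": "10001"},
]

def generalize(rec):
    """Map a record to its generalized quasi-identifiers: an age range
    (generalization) and a truncated zip code (partial suppression)."""
    lo = (rec["age"] // 10) * 10
    return (f"{lo}-{lo + 9}", rec["zip"][:3] + "**")

def is_k_anonymous(recs, k):
    counts = {}
    for r in recs:
        key = generalize(r)
        counts[key] = counts.get(key, 0) + 1
    return all(c >= k for c in counts.values())

print(is_k_anonymous(records, 2))  # True: every group has at least 2 records
print(is_k_anonymous(records, 3))  # False: the 30-39 group has only 2 records
```

If a group falls below k, the publisher must generalize further (wider ranges, shorter zip prefixes) or suppress the offending records entirely.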

How are Marketers Using k-anonymity?

Online retailers use k-anonymity to protect customer data while analyzing purchase histories and preferences to enhance their services and recommendations. 

For example, individual users can be associated with data cohorts based on their interests on their mobile device. An advertiser can then target individuals in specific cohorts. This way, the advertiser does not learn any personally identifiable information (PII) and only learns that a specific individual belongs to certain cohorts. And as long as the cohorts are k-anonymous, they protect users from re-identification, especially for large values of k.

A drawback to using k-anonymity is that sometimes revealing just the cohort a user belongs to can leak sensitive information about them. This is especially true when the cohorts are based on sensitive topics such as race, religion, or sexual orientation. A simple solution to this problem is to use predefined and publicly visible cohort categories, such as in Google Topics.

In any case, cohorts can still be combined or correlated and used to re-identify users across multiple sites. That said, k-anonymity is often combined with other privacy protections to further reduce the probability of re-identification.

As people spend more and more time online, consumers have demanded more control over their digital privacy. They’ve become particularly uncomfortable with digital tracking technology like third-party cookies that enable marketers to gather information about their browsing behavior. But eliminating third-party cookies puts marketers in a tough spot. Their businesses have relied on cookies to find new customers for over two decades. 

Government agencies in the US and Europe have responded to consumer demands by enacting regulations that offer users more protection and control over how their data is collected and processed. And many web browsers have already phased out cookies. Google has been the last holdout, and is expected to fully phase out cookies by the end of 2024.

But simply eliminating cookies won’t solve the privacy protection problem for consumers. Digital footprints are always expanding and companies need to be more vigilant than ever about protecting their customers’ data. There’s an enormous opportunity to build an ad ecosystem that respects users' privacy more than ever.

Privacy Enhancing Technologies (PETs) have emerged as a crucial ally for safeguarding consumer data. This emerging technology uses advanced cryptographic and statistical techniques to protect consumer information while still allowing marketers to glean valuable insights.

What are PETs?

PETs are a set of tools and methods designed to help organizations maintain digital privacy. They provide a layer of defense against unwanted surveillance, data breaches, and unwarranted data collection by enhancing user control and safeguarding data during its lifecycle. PETs are instrumental in upholding privacy, security, and freedom in the digital realm.

There are several types of PETs being used throughout the digital advertising ecosystem:

  1. k-Anonymity
  2. On-Device Learning
  3. Secure Computation
  4. Trusted Execution Environment
  5. Differential Privacy

PETs will play a vital role in creating an advertising ecosystem that puts privacy first. Optable is exploring the use of multiple types of PETs as we build a privacy-safe environment where clients can safely collaborate with their data partners. The following blog series will demystify the complex world of PETs and take a closer look at how advertisers are using them.


Data Collaboration and Interoperability

At Optable we view interoperability first and foremost through the lens of digital advertising’s critical systems. And when you consider the systems used for ad campaign planning, activation, and measurement, you quickly realize that these systems were all inherently interoperable for a long time thanks to widespread data sharing. With identity and data sharing on their way out for a variety of reasons, new ways of interoperating within each of these systems are required. Clean rooms are a way to achieve data interoperability in advertising, and that’s why we have invested significantly in this area.

But, the trouble with clean rooms is that both parties have to agree to use the same one in order to interoperate. The central idea with clean room technologies is that two or more parties come together around a neutral compute environment, enabling them to agree on operations to perform on their respective datasets, on the structure of their input datasets, on the outputs generated by the operations and, importantly, on who has access to the outputs. Additionally, various privacy enhancing technologies may be used to limit and constrain the outputs and the information pertaining to the underlying input datasets that is revealed.

So, what does true interoperability look like for data collaboration platforms, built from the ground up for digital advertising? Here are three important pillars:


Integration with leading DWH clean room service layers. A DWH clean room service layer is the set of primitives (APIs and interfaces) made available by leading DWHes (Google, AWS, Snowflake, etc), that enables joining of disparate organization datasets, and purpose limited computation. Optable streamlines this by automating the flow of minimized data to/from DWHes, and by federating code to these environments. The end result? A collaborator with audience data sitting in Snowflake can easily match their audience data to an Optable customer's first party data, all within Snowflake using Snowflake DCR primitives to enable trust, without the Optable customer lifting a finger. In this example the matching itself happens inside of Snowflake, but the same thing can be done with other DWH clean room service layers as well.

Compatibility with open, secure multi-party compute protocols like Private Set Intersection (PSI). What if your partner wants to match their audience data with you but they cannot move their data into a cloud based DWH? SMPC protocols such as PSI enable double blind matching on encrypted datasets, without requiring decryption of data throughout. Open-source implementations provide an independently verifiable, albeit purpose constrained clean room service layer. The end result? A collaborator with audience data sitting on premise can execute an encrypted match with an Optable customer using a free, open-source utility.

Built-in entity resolution, audience management and activation, with deep integration to all major cloud and data environments. In the real world, few organizations have all of their user data assets neatly connected in a single environment. Sure, they exist, but more often than not, organizations need to do quite a bit of work to gather, normalize, sanitize, and connect their user data so that they can effectively plan, activate, and measure using data collaboration systems. It’s therefore no wonder that when the IAB issued their State of Data report earlier this year, respondents cited time frames of months up to years to get up and running with clean room tech! Moreover, even when one company has got their user data together, their partners often require help with entity resolution. These are the reasons why Optable makes it easy to connect user data sitting in any cloud environment or system into a cohesive and unified user record view, out of the box, with no code required. Got part of your user data in your CRM? And another sitting in cloud storage? And another in your DWH? No problem.


At Optable, we believe that these pillars are the groundwork on top of which interoperability can happen, and we’re partnering with industry peers who share the same vision. Stay tuned for more exciting announcements on this front!
