

Advertisers aren’t feeling the direct blow of generative AI on traffic the way publishers are. But they see what’s coming. Tripadvisor, for one, is already adjusting its strategy as the foundation of search starts to shift. 

It’s not going full Dotdash Meredith – the publisher has openly braced for a future without Google traffic – but a recalibration is clearly underway. 

“Google’s AI mode in search is going to eat large chunks of search,” said Matthew Dacey, CMO at Tripadvisor. “It’s going to happen fast as they [Google] push it and more people subsequently adopt it.”

So far that change hasn’t landed with full force on the travel firm. According to Statista, monthly visits hovered between 146 million and 169 million in early 2023. By February 2025, that number had dropped to around 120 million.

Yes, some of that dip chimes with the rollout of AI Overviews – Google’s move to surface answers directly in search results. But the story behind Tripadvisor’s traffic shift is bigger than any one feature. In fact, it’s part of a longer arc: the way people find information is changing. Tripadvisor, like everyone else, is learning how to meet them there. 

For Dacey, that starts with repositioning Tripadvisor not just as a utility but as a daily habit – something people turn to regularly, whether they’re browsing, planning or already en route to a holiday. Less search, more morning ritual. 

Delivering on that vision comes down to three things: improving the app experience, refreshing the membership program and moving Tripadvisor higher up the funnel – turning the service into a starting point, not just a step along the way. 

If it works, the hope is that it will drive more logged-in behavior, particularly on the app.

“Right now we have over 100 million active member accounts but not many of them are using the app,” said Dacey. 

That’s where AI comes in. Tripadvisor is building features to anticipate what travelers need before they ask, using the details they share when they begin planning or booking a trip. With that context, the app can push personalized recommendations at just the right moment.

Or as Dacey put it:  “All of our push notifications that go out can be those questions that travellers might be asking and served to them using AI based on the user-generated content on our site. It’s contextual information for exactly what’s on someone’s mind.”

A new brand campaign is meant to help land that message. Launching this month, it will mark Tripadvisor’s 25th anniversary while cementing the brand as a direct planning destination – not just a link in Google’s results.

“The way I’d describe what we’re doing is two things: how do we try and be more direct with people and then on the other hand it’s about how we show up where people actually are,” he continued. 

One of those places is Perplexity. Tripadvisor’s partnership with the AI search startup, announced earlier this year, lets Perplexity tap into behavioral and preference data that traditional search engines typically can’t access. In return, Tripadvisor’s curated hotel lists appear within Perplexity’s summaries. 

Six months in, Dacey said the results are promising – though he didn’t share details. Tripadvisor has said the deal is bringing in more high-intent users, particularly those ready to book.

Eventually, this partnership will expand to include restaurants and experiences. The playbook: generate revenue, drive qualified traffic and carve out a presence in AI-powered discovery environments.

“What tends to work in these environments is longer queries so we’re in a good position at Tripadvisor because longer queries for specific use cases will always find their way to interesting content,” said Dacey. “The question for us then is how do we get credit for that and how we translate that influence into something tangible on our side.”

That optimism points to a larger dynamic. Brands like Tripadvisor may be better positioned than publishers in the AI era. For one, they’re not monetizing pageviews the same way. They can afford to lose some traffic without losing revenue. And unlike publishers, they can shift ad dollars – moving spend into performance or partnership to make up for what’s lost in search.

That’s not to say CMOs like Dacey aren’t concerned. They are. But they also have more options. 

“Of course, advertisers are going to feel second-order effects of traffic from search going down like CPCs going up but it’s not comparable to publishers who view that traffic as their lifeblood,” said Tim Hussain, co-founder of AI consulting firm Signal42. “A CMO can always reallocate that money into other channels and escape it – well to a point.”





“We knew things were going to be a lot more difficult when we moved to Somerset,” Ryan says. “However, we thought we’d done a thorough job in weighing up the pros and cons before we moved – career compromises, cost of living, cultural trade-offs. But what caught us off guard were the less obvious provisions and hidden infrastructures we hadn’t realised we relied on in London. It was the lack of creative recruitment agencies or Facebook groups for set designers to sell and share props. It was the reduced word-of-mouth opportunities, or the lack of apps, magazines and influencers keeping you up to date with every exhibition, opening or event. It was the accumulation of these smaller, more invisible gaps that felt the hardest to acclimate to. Only after we moved did we realise how essential those things had been to our creative practices.”

Setting up Makers’ Yard helped in this process a lot. “We didn’t know anyone when we moved to Frome, but the building quickly became a hub for local creatives. The conversations, connections and collaborations that have unfolded in the space have been personally and professionally significant for us. Without those impromptu introductions and serendipitous chats, the transition would have been much harder.” Makers’ Yard became a testament to the power of a concentrated community space – but not everyone can take on a Victorian warehouse renovation. Soon, Ryan saw fellow city-ditchers grapple with the same issues they had faced, whether it was about jobs, events or selling kit. That’s how Ryan and Emma spotted an opportunity for a digital version of what they had created with Makers’ Yard – an alternative type of social network, one that was hyperlocal to Frome and nearby towns, and dedicated to the creative industry there.

The result is M.Y Local Network, a localised social network for creatives. For a £25 annual subscription fee, members gain access to a closed Discord community for Frome and its surrounding area (currently at around 150 members, including organisations and individuals). From there, you’ll find specific groups serving different needs: a jobs board, a professional sell-and-swap, peer advice and recommendations, local events, workplace listings, a library of member-recommended digital tools and online resources (podcasts, funding resources, etc.), as well as a collective Google map.





Google long ago filed a patent for ranking search results by trust. The groundbreaking idea behind the patent is that user behavior can be used as a starting point for developing a ranking signal.

The big idea behind the patent is that the Internet is full of websites all linking to and commenting about each other. But which sites are trustworthy? Google’s solution is to utilize user behavior to indicate which sites are trusted and then use the linking and content on those sites to reveal more sites that are trustworthy for any given topic.

PageRank is basically the same thing, only it begins and ends with one website linking to another. The innovation of Google’s trust ranking patent is to put the user at the start of that trust chain, like this:

User trusts X Websites > X Websites trust Other Sites > This feeds into Google as a ranking signal

The trust originates from the user and flows to trusted sites, which themselves provide anchor text, lists of other sites, and commentary about other sites.

That, in a nutshell, is what Google’s trust-based ranking algorithm is about.
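To make that chain concrete, here’s a minimal sketch in Python. The site names and the naive one-hop propagation are illustrative assumptions, not details taken from the patent:

```python
# Illustrative sketch of the trust chain; all data here is hypothetical.

# Step 1: user behavior indicates which sites a user trusts.
user_trusted_sites = {"digitalcameraworld.com", "yourhealth.com"}

# Step 2: each trusted site endorses (links to and labels) other sites.
endorsements = {
    "digitalcameraworld.com": {"canon.com", "dpreview.com"},
    "yourhealth.com": {"cancer.org"},
}

def sites_trusted_by_extension(trusted: set[str]) -> set[str]:
    """Step 3: sites endorsed by user-trusted sites inherit trust,
    which can then feed into a ranking signal."""
    inherited = set()
    for site in trusted:
        inherited |= endorsements.get(site, set())
    return inherited

print(sites_trusted_by_extension(user_trusted_sites))
# e.g. {'canon.com', 'dpreview.com', 'cancer.org'} (set order varies)
```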

The deeper insight is that it reveals Google’s groundbreaking approach to letting users be a signal of what’s trustworthy. You know how Google keeps saying to create websites for users? This is what the trust patent is all about, putting the user in the front seat of the ranking algorithm.

Google’s Trust And Ranking Patent

The patent was coincidentally filed around the same period that Yahoo and Stanford University published a TrustRank research paper focused on identifying spam pages.

Google’s patent is not about finding spam. It’s focused on doing the opposite, identifying trustworthy web pages that satisfy the user’s intent for a search query.

How Trust Factors Are Used

The first part of any patent is an Abstract section that offers a very general description of the invention, and that’s what this patent’s Abstract does as well.

Here’s what the Abstract says:

“A search engine system provides search results that are ranked according to a measure of the trust associated with entities that have provided labels for the documents in the search results.

A search engine receives a query and selects documents relevant to the query.

The search engine also determines labels associated with selected documents, and the trust ranks of the entities that provided the labels.

The trust ranks are used to determine trust factors for the respective documents. The trust factors are used to adjust information retrieval scores of the documents. The search results are then ranked based on the adjusted information retrieval scores.”

As you can see, the Abstract does not say who the “entities” are nor does it say what the labels are yet, but it will.
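Reading between the lines, the Abstract describes a fairly simple rerank step: determine a trust factor for each document, use it to adjust the document’s information retrieval score, and sort. Here’s a minimal sketch of that step; the multiplicative adjustment is an assumption, since the patent doesn’t spell out the math:

```python
# Hypothetical sketch of the rerank step described in the Abstract.
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    ir_score: float      # information retrieval score for the query
    trust_factor: float  # derived from trust ranks of labeling entities

def rank_with_trust(docs: list[Document]) -> list[Document]:
    # Adjust each IR score by the document's trust factor, then rerank.
    return sorted(docs, key=lambda d: d.ir_score * d.trust_factor, reverse=True)

results = rank_with_trust([
    Document("a.example", ir_score=0.9, trust_factor=0.5),
    Document("b.example", ir_score=0.7, trust_factor=1.2),
])
# b.example (0.7 * 1.2 = 0.84) now outranks a.example (0.9 * 0.5 = 0.45)
```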

Field Of The Invention

The next part is called the Field Of The Invention. It describes the technical domain of the invention (information retrieval) and its focus (trust relationships between users) as applied to ranking web pages.

Here’s what it says:

“The present invention relates to search engines, and more specifically to search engines that use information indicative of trust relationship between users to rank search results.”

Now we move on to the next section, the Background, which describes the problem this invention solves.

Background Of The Invention

This section describes why search engines fall short of answering user queries (the problem) and why the invention solves the problem.

This is how the patent explains the main problems:

“An inherent problem in the design of search engines is that the relevance of search results to a particular user depends on factors that are highly dependent on the user’s intent in conducting the search—that is why they are conducting the search—as well as the user’s circumstances, the facts pertaining to the user’s information need.

Thus, given the same query by two different users, a given set of search results can be relevant to one user and irrelevant to another, entirely because of the different intent and information needs.”

Next it goes on to explain that users trust certain websites that provide information about certain topics:

“…In part because of the inability of contemporary search engines to consistently find information that satisfies the user’s information need, and not merely the user’s query terms, users frequently turn to websites that offer additional analysis or understanding of content available on the Internet.”

Websites Are The Entities

The rest of the Background section names forums, review sites, blogs, and news websites as places that users turn to for their information needs, calling them vertical knowledge sites. Vertical Knowledge sites, it’s explained later, can be any kind of website.

The patent explains that trust is why users turn to those sites:

“This degree of trust is valuable to users as a way of evaluating the often bewildering array of information that is available on the Internet.”

To recap, the “Background” section explains that the trust relationships between users and entities like forums, review sites, and blogs can be used to influence the ranking of search results. As we go deeper into the patent we’ll see that the entities are not limited to the above kinds of sites, they can be any kind of site.

Patent Summary Section

This part of the patent is interesting because it brings together all of the concepts into one place, but in a general high-level manner, and throws in some legal paragraphs that explain that the patent can apply to a wider scope than is set out in the patent.

Here’s an abbreviated version of the Summary that gives a nutshell idea of the inner workings of the invention:

“A user provides a query to the system…The system retrieves a set of search results… The system determines which query labels are applicable to which of the search result documents. … determines for each document an overall trust factor to apply… adjusts the …retrieval score… and reranks the results.”

The above is a general description of the invention.

The next section, called the Detailed Description, dives deep into the details. At this point it’s becoming increasingly evident that the patent is highly nuanced and cannot be reduced to simple advice like “optimize your site like this to earn trust.”

A large part of the patent hinges on two things: a trust button and an advanced search operator, “label:”.

Neither the trust button nor the “label:” search operator has ever existed. As you’ll see, they are quite probably stand-ins for techniques that Google doesn’t want to explicitly reveal.

Detailed Description In Four Parts

The details of this patent are located in four sections within the Detailed Description section of the patent. This patent is not as simple as 99% of SEOs say it is.

These are the four sections:

  1. System Overview
  2. Obtaining and Storing Trust Information
  3. Obtaining and Storing Label Information
  4. Generating Trust Ranked Search Results

The System Overview is where the patent deep dives into the specifics. The following is an overview to make it easy to understand.

System Overview

1. Explains how the invention (a search engine system) ranks search results based on trust relationships between users and the user-trusted entities who label web content.

2. The patent describes a “trust button” that a user can click that tells Google that a user trusts a website or trusts the website for a specific topic or topics.

3. The patent says a trust related score is assigned to a website when a user clicks a trust button on a website.

4. The trust button information is stored in a trust database that’s referred to as #190.

Here’s what it says about assigning a trust rank score based on the trust button:

“The trust information provided by the users with respect to others is used to determine a trust rank for each user, which is measure of the overall degree of trust that users have in the particular entity.”

Trust Rank Button

The patent refers to the “trust rank” of the user-trusted websites. That trust rank is based on a trust button that a user clicks to indicate that they trust a given website, assigning a trust rank score.

The patent says:

“…the user can click on a “trust button” on a web page belonging to the entity, which causes a corresponding record for a trust relationship to be recorded in the trust database 190.

In general any type of input from the user indicating that such as trust relationship exists can be used.”

The trust button has never existed and the patent quietly acknowledges this by stating that any type of input can be used to indicate the trust relationship.

So what is it? I believe that the “trust button” is a stand-in for user behavior metrics in general, and site visitor data in particular. The patent Claims section does not mention trust buttons at all but does mention user visitor data as an indicator of trust.

Here are several passages that mention site visits as a way to understand if a user trusts a website:

“The system can also examine web visitation patterns of the user and can infer from the web visitation patterns which entities the user trusts. For example, the system can infer that a particular user trust a particular entity when the user visits the entity’s web page with a certain frequency.”

The same thing is stated in the Claims section of the patent, it’s the very first claim they make for the invention:

“A method performed by data processing apparatus, the method comprising:
determining, based on web visitation patterns of a user, one or more trust relationships indicating that the user trusts one or more entities;”

It may very well be that site visitation patterns and other user behaviors are what is meant by the “trust button” references.
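If that reading is right, the inference could be as simple as a frequency threshold over visit logs. Here’s a hypothetical sketch; the threshold and the log format are assumptions:

```python
# Hypothetical: infer trusted entities from visitation patterns.
from collections import Counter

def infer_trusted_entities(visit_log: list[str], min_visits: int = 5) -> set[str]:
    """Treat any site the user visits at least `min_visits` times as trusted."""
    counts = Counter(visit_log)
    return {site for site, n in counts.items() if n >= min_visits}

log = ["news.example"] * 7 + ["shop.example"] * 2
print(infer_trusted_entities(log))  # {'news.example'}
```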

Labels Generated By Trusted Sites

The patent defines trusted entities as news sites, blogs, forums, and review sites, but it isn’t limited to those kinds of sites; a trusted entity could be any kind of website.

Trusted websites create references to other sites, and in that reference they label those other sites as being relevant to a particular topic. That label could be anchor text. But it could be something else.

The patent explicitly mentions anchor text only once:

“In some cases, an entity may simply create a link from its site to a particular item of web content (e.g., a document) and provide a label 107 as the anchor text of the link.”

Although it only explicitly mentions anchor text once, there are other passages where anchor text is strongly implied. For example, the patent offers a general description of labels as describing or categorizing the content found on another site:

“…labels are words, phrases, markers or other indicia that have been associated with certain web content (pages, sites, documents, media, etc.) by others as descriptive or categorical identifiers.”

Labels And Annotations

Trusted sites link out to web pages with labels and links. The combination of a label and a link is called an annotation.

This is how it’s described:

“An annotation 106 includes a label 107 and a URL pattern associated with the label; the URL pattern can be specific to an individual web page or to any portion of a web site or pages therein.”
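In code, an annotation is just a small record plus a matching rule. Here’s a sketch; the wildcard-style pattern matching is an assumption, since the patent doesn’t define the pattern syntax:

```python
# Hypothetical representation of an annotation (label + URL pattern).
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Annotation:
    label: str
    url_pattern: str  # may cover one page or a whole section of a site

ann = Annotation(label="Professional review",
                 url_pattern="www.digitalcameraworld.com/reviews/*")

print(fnmatch("www.digitalcameraworld.com/reviews/canon-slr", ann.url_pattern))
# True: the label applies to this document
```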

Labels Used In Search Queries

Users can also search with “labels” in their queries by using a non-existent “label:” advanced search query. Those kinds of queries are then used to match the labels that a website page is associated with.

This is how it’s explained:

“For example, a query “cancer label:symptoms” includes the query term “cancel” and a query label “symptoms”, and thus is a request for documents relevant to cancer, and that have been labeled as relating to “symptoms.”

Labels such as these can be associated with documents from any entity, whether the entity created the document, or is a third party. The entity that has labeled a document has some degree of trust, as further described below.”

What is that label in the search query? It could simply be certain descriptive keywords, but there aren’t any clues to speculate further than that.
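Mechanically, though, splitting such a query into content terms and query labels is straightforward. A toy parser, purely for illustration:

```python
# Toy parser for queries like "cancer label:symptoms"; the syntax
# handling is an assumption, since the operator never shipped.
def parse_query(query: str) -> tuple[list[str], list[str]]:
    terms, labels = [], []
    for token in query.split():
        if token.startswith("label:"):
            labels.append(token[len("label:"):])
        else:
            terms.append(token)
    return terms, labels

print(parse_query("cancer label:symptoms"))
# (['cancer'], ['symptoms'])
```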

The patent puts it all together like this:

“Using the annotation information and trust information from the trust database 190, the search engine 180 determines a trust factor for each document.”

Takeaway:

A user’s trust is in a website. That user-trusted website is not necessarily the one that’s ranked; it’s the website that’s linking to, and thereby trusting, another relevant web page. The web page that is ranked can be the one that the trusted site has labeled as relevant for a specific topic, and it could even be a web page on the trusted site itself. The purpose of the user signals is to provide a starting point, so to speak, from which to identify trustworthy sites.
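Here’s how those pieces might combine into a per-document trust factor. The prefix matching and the additive aggregation below are stand-ins for math the patent leaves unstated:

```python
# Hypothetical trust factor for a document, aggregated from the trust
# ranks of the entities whose labels match it.
def trust_factor(doc_url: str, annotations: list[tuple[str, str, float]]) -> float:
    """annotations: (url_pattern_prefix, label, entity_trust_rank) triples."""
    factor = 1.0
    for pattern, _label, entity_trust in annotations:
        if doc_url.startswith(pattern):   # naive stand-in for URL matching
            factor += entity_trust
    return factor

anns = [("www.yourhealth.com/colon-cancer", "Symptoms", 0.8),
        ("www.jazzworld.com", "Jazz music", 0.3)]
print(trust_factor("www.yourhealth.com/colon-cancer/signs", anns))  # 1.8
```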

Experts Are Trusted

Vertical Knowledge Sites, sites that users trust, can host the commentary of experts. The expert could be the publisher of the trusted site as well. Experts are important because links from expert sites are used as part of the ranking process.

Experts are defined as publishing a deep level of content on the topic:

“These and other vertical knowledge sites may also host the analysis and comments of experts or others with knowledge, expertise, or a point of view in particular fields, who again can comment on content found on the Internet.

For example, a website operated by a digital camera expert and devoted to digital cameras typically includes product reviews, guidance on how to purchase a digital camera, as well as links to camera manufacturer’s sites, new products announcements, technical articles, additional reviews, or other sources of content.

To assist the user, the expert may include comments on the linked content, such as labeling a particular technical article as “expert level,” or a particular review as “negative professional review,” or a new product announcement as ‘new 10MP digital SLR’.”

Links From Expert Sites

Links and annotations from user-trusted expert sites are described as sources of trust information:

“For example, Expert may create an annotation 106 including the label 107 “Professional review” for a review 114 of Canon digital SLR camera on a web site “www.digitalcameraworld.com”, a label 107 of “Jazz music” for a CD 115 on the site “www.jazzworld.com”, a label 107 of “Classic Drama” for the movie 116 “North by Northwest” listed on website “www.movierental.com”, and a label 107 of “Symptoms” for a group of pages describing the symptoms of colon cancer on a website 117 “www.yourhealth.com”.

Note that labels 107 can also include numerical values (not shown), indicating a rating or degree of significance that the entity attaches to the labeled document.

Expert’s web site 105 can also include trust information. More specifically, Expert’s web site 105 can include a trust list 109 of entities whom Expert trusts. This list may be in the form of a list of entity names, the URLs of such entities’ web pages, or by other identifying information. Expert’s web site 105 may also include a vanity list 111 listing entities who trust Expert; again this may be in the form of a list of entity names, URLs, or other identifying information.”

Inferred Trust

The patent describes additional ways that trust can be inferred. These are more traditional signals, like links, a list of trusted web pages (maybe a resources page?) and a list of sites that trust the website.

These are the inferred trust signals:

“(1) links from the user’s web page to web pages belonging to trusted entities;
(2) a trust list that identifies entities that the user trusts; or
(3) a vanity list which identifies users who trust the owner of the vanity page.”

Another kind of trust signal can be inferred from the sites a user tends to visit.

The patent explains:

“The system can also examine web visitation patterns of the user and can infer from the web visitation patterns which entities the user trusts. For example, the system can infer that a particular user trusts a particular entity when the user visits the entity’s web page with a certain frequency.”

Takeaway:

That’s a pretty big signal and I believe that it suggests that promotional activities that encourage potential site visitors to discover a site and then become loyal site visitors can be helpful. For example, that kind of signal can be tracked with branded search queries. It could be that Google is only looking at site visit information but I think that branded queries are an equally trustworthy signal, especially when those queries are accompanied by labels… ding, ding, ding!

The patent also lists some further-out examples of inferred trust, like contact/chat list data. It doesn’t say social media, just contact/chat lists.

Trust Can Decay or Increase

Another interesting feature of trust rank is that it can decay or increase over time.

The patent is straightforward about this part:

“Note that trust relationships can change. For example, the system can increase (or decrease) the strength of a trust relationship for a trusted entity. The search engine system 100 can also cause the strength of a trust relationship to decay over time if the trust relationship is not affirmed by the user, for example by visiting the entity’s web site and activating the trust button 112.”
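In other words, trust behaves like a score that decays unless it’s reaffirmed. A hypothetical sketch, with the decay rate and the reinforcement rule as assumptions:

```python
# Hypothetical trust strength update: decay unless reaffirmed.
def updated_trust(strength: float, periods_since_affirmed: int,
                  decay: float = 0.9, reaffirmed: bool = False) -> float:
    if reaffirmed:  # e.g. the user visited the entity's site again
        return min(1.0, strength + 0.1)
    return strength * (decay ** periods_since_affirmed)

print(updated_trust(0.8, periods_since_affirmed=3))  # ~0.58 after decay
print(updated_trust(0.8, 0, reaffirmed=True))        # ~0.9 after reaffirming
```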

Trust Relationship Editor User Interface

Directly after the above paragraph is a section about enabling users to edit their trust relationships through a user interface. There has never been such a thing, just like the non-existent trust button.

This is possibly a stand-in for something else. Could this trusted sites dashboard be Chrome browser bookmarks or sites that are followed in Discover? This is a matter for speculation.

Here’s what the patent says:

“The search engine system 100 may also expose a user interface to the trust database 190 by which the user can edit the user trust relationships, including adding or removing trust relationships with selected entities.

The trust information in the trust database 190 is also periodically updated by crawling of web sites, including sites of entities with trust information (e.g., trust lists, vanity lists); trust ranks are recomputed based on the updated trust information.”

What Google’s Trust Patent Is About

Google’s Search Result Ranking Based On Trust patent describes a way of leveraging user-behavior signals to understand which sites are trustworthy. The system then identifies sites that are trusted by the user-trusted sites and uses that information as a ranking signal. There is no actual trust rank metric, but there are ranking signals related to what users trust. Those signals can decay or increase based on factors like whether a user still visits those sites.

The larger takeaway is that this patent is an example of how Google is focused on user signals as a ranking source, feeding them back into ranking sites that meet users’ needs. This means that instead of doing things because “this is what Google likes,” it’s better to go a level deeper and do things because users like them. That will feed back to Google through these kinds of algorithms that measure user behavior patterns, something we all know Google uses.

Featured Image by Shutterstock/samsulalam





Gray Media has promoted Jessica Laszewski to general manager of WSAW, the CBS affiliate in Wausau, Wisconsin.

Laszewski has been the news director at Gray Television’s WMTV in Madison, Wisconsin since October 2017.

Gray said that under her leadership, WMTV rose to number one in news viewership and underwent a successful digital transformation that strengthened its multi-platform presence. Her newsroom’s commitment to excellence was recognized with the prestigious RTDNA Regional Edward R. Murrow Award for Overall Excellence in 2025, 2024, and 2022.

Laszewski has also worked as news director at Gray’s WSAW (CBS) in Wausau, Wisconsin, and at WNDU (NBC) in South Bend, Indiana. Earlier in her career, she held producing and executive producing roles at WMTV (NBC) in Madison, WEAU (NBC) in Eau Claire, WBAY (ABC) in Green Bay, and WISC (CBS) in Madison.

She starts July 21, 2025.






To download the accompanying files for ImagineFX issue 255, head to this link and click download. Scroll down for Eric Messinger’s excellent video workshop.

Please note: if you have any trouble downloading the file, right-click the link and open it in a new browser window. Next, click in the URL address line to select all of the link, and press Return to start the download.






A recent discussion on Google’s Search Off the Record podcast challenges long-held assumptions about technical SEO, revealing that most top-ranking websites don’t use valid HTML.

Despite these imperfections, they continue to rank well in search results.

Search Advocate John Mueller and Developer Relations Engineer Martin Splitt referenced a study by former Google webmaster Jens Meiert, which found that only one homepage among the top 200 websites passed HTML validation tests.

Mueller highlighted:

“0.5% of the top 200 websites have valid HTML on their homepage. One site had valid HTML. That’s it.”

He described the result as “crazy,” noting that the study surprised even developers who take pride in clean code.

Mueller added:

“Search engines have to deal with whatever broken HTML is out there. It doesn’t have to be perfect, it’ll still work.”

When HTML Errors Matter

While most HTML issues are tolerated, certain technical elements, such as metadata, must be correctly implemented.

Splitt said:

“If something is written in a way that isn’t HTML compliant, then the browser will make assumptions.”

That usually works fine for visible content, but can fail “catastrophically” when it comes to elements that search engines rely on.

Mueller said:

“If [metadata] breaks, then it’s probably not going to do anything in your favor.”
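Here’s a small demonstration of the kind of assumption-making Splitt describes, using Python’s lenient stdlib parser as a stand-in for a browser. An unquoted content attribute splits at the first space, mangling a robots directive:

```python
# Demonstration: how a parser "repairs" non-compliant markup.
from html.parser import HTMLParser

class MetaDump(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            print(attrs)

# The content value should have been quoted: content="noindex, nofollow"
MetaDump().feed('<meta name="robots" content=noindex, nofollow>')
# Prints roughly: [('name', 'robots'), ('content', 'noindex,'), ('nofollow', None)]
# "nofollow" became a meaningless boolean attribute instead of a directive.
```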

SEO Is Not A Technical Checklist

Google also challenged the notion that SEO is a box-ticking exercise for developers.

Mueller said:

“Sometimes SEO is also not so much about purely technical things that you do, but also kind of a mindset.”

Splitt said:

“Am I using the terminology that my potential customers would use? And do I have the answers to the things that they will ask?”

Naming things appropriately, he said, is one of the most overlooked SEO skills and often more important than technical precision.

Core Web Vitals and JavaScript

Two recurring sources of confusion, Core Web Vitals and JavaScript, were also addressed.

Core Web Vitals

The podcast hosts reiterated that good Core Web Vitals scores don’t guarantee better rankings.

Mueller said:

“Core Web Vitals is not the solution to everything.”

Mueller added:

“Developers love scores… it feels like ‘oh I should like maybe go from 85 to 87 and then I will rank first,’ but there’s a lot more involved.”

JavaScript

On the topic of JavaScript, Splitt said that while Google can process it, implementation still matters.

Splitt said:

“If the content that you care about is showing up in the rendered HTML, you’ll be fine generally speaking.”

Splitt added:

“Use JavaScript responsibly and don’t use it for everything.”

Misuse can still create problems for indexing and rendering, especially if assumptions are made without testing.

What This Means

The key takeaway from the podcast is that technical perfection isn’t 100% necessary for SEO success.

While critical elements like metadata must function correctly, the vast majority of HTML validation errors won’t prevent ranking.

As a result, developers and marketers should be cautious about overinvesting in code validation at the expense of content quality and search intent alignment.

Listen to the full podcast episode below:





Microsoft Advertising is revamping its approach to enforcing ad policy compliance. Instead of outright disapproving entire ads, it reviews individual ad assets (e.g., headlines, descriptions, and images).

This move gives advertisers more flexibility and less disruption when policy issues arise. Instead of pulling down full ads, Microsoft can now flag only the problematic elements, allowing the rest of the ad to keep running as long as the minimum number of approved assets remains.
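In pseudocode terms, the serving decision might look something like the sketch below. The statuses and the minimum threshold are illustrative; Microsoft hasn’t published the exact rule:

```python
# Hypothetical asset-level serving check.
def ad_can_serve(asset_statuses: dict[str, str], min_approved: int = 1) -> bool:
    """Serve the ad while it retains at least `min_approved` approved assets."""
    approved = [a for a, s in asset_statuses.items() if s == "approved"]
    return len(approved) >= min_approved

ad = {"headline_1": "approved", "headline_2": "disapproved", "image_1": "approved"}
print(ad_can_serve(ad))  # True: only the flagged asset stops serving
```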


Why we care. Advertisers no longer need to fear that a single image or headline tweak could take an entire ad offline. Instead of pausing an ad as a whole due to one disapproved asset, Microsoft Advertising reviews each component individually so approved elements can continue running.

This reduces downtime, preserves performance, and minimizes disruption during edits or policy reviews. It’s a step toward more efficient campaign management with less risk to delivery.

Yes, but. Appeals and disapproval emails at the asset level aren’t available yet but are in the pipeline.

The bottom line is that Microsoft Advertising is taking a smarter, more surgical approach to policy enforcement—keeping ads live and performance intact while still holding every asset accountable.






Dreaming of running your own creative studio but unsure if you should gain more experience first? Katie Cadwell explains how to trust your timing and grow through doing in this week’s Creative Career Conundrums.





German data protection official Meike Kamp has filed a formal request that Apple and Google remove the DeepSeek app from their respective app stores for the illegal transfer of users’ personal data to China, in violation of European Union law.

Meike Kamp, the Commissioner for Data Protection and Freedom of Information, previously requested in May that DeepSeek voluntarily comply with the legal requirements for data transfer to other countries, stop the transfer of data altogether, or remove their app from the Apple and Google app stores.

Failure to respond to those requests resulted in the official taking the next step of filing a report of illegal content with both Apple and Google, which will now examine it and decide DeepSeek’s future on their platforms.

The data protection commissioner stated (translated from original German):

“The transfer of user data by DeepSeek to China is unlawful. DeepSeek has not been able to convincingly prove to my authority that data from German users:

I have therefore informed Google and Apple, as operators of the largest app platforms, about the violations and expect a blocking to be checked as soon as possible.”

Takeaways

Germany’s data protection official has formally requested that Apple and Google remove the DeepSeek app from their app stores due to illegal data transfers of German users’ personal information to China. The request follows concerns over Chinese government access to sensitive user data, after DeepSeek failed to comply with EU data protection standards.

Featured Image by Shutterstock/Mijansk786





While TikTok’s U.S. lifespan remains uncertain, the entertainment app is firmly focused on its future.

“When I met with TikTok last week, they were talking me through their product roadmap for the rest of the year,” said one Cannes Lions attendee, who asked for anonymity to speak candidly about what they discussed during their meeting with TikTok at the festival. 

A big part of that roadmap includes evolving their eight-month-old AI-campaign tool, Smart+, so marketers have more control over how it buys their ads.

“They said they’re going to improve the Smart+ platform by providing advertisers with more control around bidding, targeting and creative, which makes sense, and then the automation sets in after that,” said the exec.

In fact, some of those rollouts are slowly starting to materialize. Last week (June 23), TikTok introduced a goal-based bidding feature called Smart+ target ROAS (tROAS).

“The solution offers additional input from the advertiser and more control over bidding, with the intention of driving greater performance and scalability,” said Olivia Picard, director of paid social at Dept, a digital services agency.
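For context, target ROAS bidding generally works by capping the bid at the predicted conversion value divided by the ROAS goal. The sketch below is the textbook formula, not TikTok’s disclosed implementation:

```python
# Generic target ROAS bid cap (illustrative, not TikTok's actual system).
def troas_bid(predicted_conversion_value: float, target_roas: float) -> float:
    """Bid up to the spend at which predicted return / spend equals the goal."""
    return predicted_conversion_value / target_roas

# An impression predicted to drive $12 of value at a 4x ROAS goal
# is worth paying up to $3 for.
print(troas_bid(12.0, 4.0))  # 3.0
```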

Plus, it looks as though targeting improvements are already in the works behind the scenes: TikTok is refining its account-level targeting by updating demographics targeting and introducing device-level targeting as well as advanced targeting, which reaches audiences based on their likes and behavior.

Another exec, who exchanged anonymity for candor, talked about TikTok’s audience suggestion tool — currently in beta and available only through account reps — which automatically identifies and prioritizes a brand’s target audience. They said they haven’t seen a date for when it will officially launch, though.

“Advertisers can provide age, gender and custom audience suggestions, and then TikTok will match those profiles first before expanding, giving advertisers increased control over the signals used for audience targeting,” they said.

The same exec pointed to another product being introduced soon: TikTok’s offline event tracking for Smart+ web and catalog campaigns, which lets marketers increase their visibility into TikTok’s impact on driving offline sales.

Taken together, it’s clear TikTok’s Smart+ is borrowing from its platform peers’ AI-campaign tools by giving advertisers more options: they can either use a setup that lets them establish their objectives and then have Smart+ do the actual heavy lifting of campaign management, or they can opt for a more customized setup where they pick and choose where to apply Smart+ tools in their campaigns.

“Meta’s Advantage+, Google’s Performance Max, TikTok’s Smart+ — all these platforms are moving towards this AI based approach, with very minimal upfront audience tailoring,” said Chris Matheson, media director at Markacy. “Advertisers really need to ensure they’re doing more analysis on the back end as to how the algorithm is steering to find their customers. That’s something that should be looked into and monitored and corrected primarily through the use of creative.”

That’s especially relevant on TikTok, where more and more inventory — including search — is being funneled into Smart+. 

According to another exec who met with TikTok in Cannes, the platform currently lets marketers exclude search inventory from Smart+ campaigns, but they’ll soon require all advertisers to opt into it by default — akin to how Google operates. In other words, search ads will be bundled into Smart+ campaigns eventually. 

“So it seems like they will do away with this altogether, and advertisers won’t be able to opt out of this placement specifically,” said Matheson. 

In some ways, it’s a double-edged sword: it’s great for TikTok because it forces advertisers to embrace their search ads. But for advertisers, it almost puts them at the mercy of Smart+.

“As an advertiser, I of course always want more control!” said Jeremy Hull, chief product officer, North America at Brainlabs. “But from TikTok’s perspective, it’s a smart move that will accelerate the adoption of search ads on the platform. It’ll encourage advertisers to focus on the audience, creative and how they use the platform, rather than artificially splitting management/strategy into legacy channel structures.”

These updates come as TikTok’s future in the U.S. remains shrouded in uncertainty. Its U.S. ban extension deadline now rests on Sept. 17. TikTok’s second extension deadline came and went during Cannes Lions, and (unsurprisingly) led to a third extension — this time for 90 days — ordered by President Trump as he aims to resolve trade deals with China. Still, TikTok appears unfazed by the will-it-won’t-it back and forth, and the longer this continues, the more ingrained it becomes in the U.S.


