

MarTechCharts regularly highlights data of interest to marketers and marketing operations professionals.

When measured by conversion rate, the average AI search visitor (tracked to a non-Google search source like ChatGPT) is 4.4 times as valuable as the average visit from traditional organic search, according to research by Semrush. (Semrush is the parent company of MarTech publisher Third Door Media.)

Semrush predicts that as AI search grows and traditional search declines over time, AI channels will drive similar amounts of economic value globally by the end of 2027, and potentially far more in later years.

When it comes to conversions, the visitors from AI search show up at websites with a wealth of information supplied by their LLMs. That makes them more prepared to make a purchase decision.

The post Average LLM visitor worth 4.4x organic search visitors appeared first on MarTech.






Designing for other creatives has got to be one of the hardest tasks out there. It’s a challenge we took on last year when rebranding our very own Brand Impact Awards, and in 2025, JKR has created a new identity for one of the biggest events of the design year, D&AD Awards and Festival.

Everywhere you went at D&AD Festival and Awards, you were surrounded by riffs on the idea of Drawn to Create. The identity centres around the iconic Pencil (how could it not?) and a bold new typeface created with Studio DRAMA, Pencil Gothic. The typeface and identity morphed into different shapes on stage, lent itself to signage and backdrops, and was front and centre of the glitzy awards ceremony.






The new AI Mode tab in Google’s results, currently only active in the U.S., enables users to get an AI-generated answer to their query.

You can ask a detailed question in AI Mode, and Google will provide a summarized answer.

Google AI Mode answer for the question [what are the best ways to grow your calf muscles], providing a detailed summary of exercises and tips (Image Credit: Barry Adams)

Google explains how it generates these answers in some recently published documentation.

The critical process is what Google calls a “query fan-out” technique, where many related queries are performed in the background.

The results from these related queries are collected, summarized, and integrated into the AI-generated response to provide more detail, accuracy, and usefulness.

Having played with AI Mode since its launch, I have to admit it’s pretty good. I get useful answers, often with detailed explanations that give me the information I am looking for. It also means I have less need to click through to cited source websites.

I have to admit that, in many cases, I find myself reluctant to click on a source webpage, even when I want additional information. It’s simpler to ask AI Mode a follow-up question rather than click to a webpage.

Much of the web has become quite challenging to navigate. Clicking on an unknown website for the first time means having to brave a potential gauntlet of cookie-consent forms, email signup pop-ups, app install overlays, autoplay videos, and a barrage of intrusive ads.

The content you came to the page for is frequently hidden behind several barriers to entry that the average user will only persist with if they really want to read that content.

And then in many cases, the content isn’t actually there, or is incomplete and not quite what the user was looking for.

AI Mode removes that friction. You get most of the content directly in the AI-generated answer.

You can still click to a webpage, but often it’s easier to simply ask the AI a more specific follow-up question. No need to brave unusable website experiences and risk incomplete content after all.

AI Mode & News

Unlike AI Overviews, AI Mode will provide summaries for almost any query, including news-specific queries:

AI Mode answer for the [latest news] query (Image Credit: Barry Adams)

Playing with AI Mode, I’ve seen some answers to news-specific queries that don’t even cite news sources, but link only to Wikipedia.

For contrast, the regular Google SERP for the same query features a rich Top Stories box with seven news stories.

With these types of results in AI Mode, the shelf life of news is reduced even further.

Where in search, you can rely on a Top Stories news box to persist for a few days after a major news event, in AI Mode, news sources can be rapidly replaced by Wikipedia links. This further reduces the traffic potential to news publishers.

A Google SERP for [who won roland garros 2025] with a rich Top Stories box vs. the AI Mode answer linking only to Wikipedia (Image Credit: Barry Adams)

There is some uncertainty about AI Mode’s traffic impact. I’ve seen examples of AI Mode answers that provide direct links to webpages in-line with the response, which could help drive clicks.

Google is certainly not done experimenting with AI Mode. We haven’t seen the final product yet, and because it’s an experimental feature that most users aren’t engaged with (see below), there’s not much data on CTR.

As an educated guess, the click-through rate from AI Mode answers to their cited sources is expected to be at least as low as, and probably lower than, the CTR from AI Overviews.

This means publishers could potentially see their traffic from Google search decline by 50% or more.

AI Mode User Adoption

The good news is that user adoption of AI Mode appears to be low.

The latest data from Similarweb shows that after initial growth, usage of the AI Mode tab on Google.com in the U.S. has dipped slightly and now sits at just over 1%.

Data courtesy of Similarweb and Aleyda Solis (Image credit: Barry Adams)

This makes it about half as popular as the News tab, which is not a particularly popular tab within Google’s search results to begin with.

It could be that Google’s users are satisfied with AI Overviews and don’t need expanded answers in AI Mode, or that Google hasn’t given enough visual emphasis to AI Mode to drive a lot of usage.

I suspect that Google may try to make AI Mode more prominent, perhaps by allowing users to click from an AI Overview into AI Mode (the same way you can click from a Top Stories box to the News tab), or by integrating it more prominently into the default SERP.

When user adoption of AI Mode increases, the impact will be keenly felt by publishers. Google’s CEO has reiterated their commitment to sending traffic to the web, but the reality appears to contradict that.

In some of their newest documentation about AI, Google strongly hints at diminished traffic and encourages publishers to “[c]onsider looking at various indicators of conversion on your site, be it sales, signups, a more engaged audience, or information lookups about your business.”

AI Mode Survival Strategies

Broad adoption of AI Mode, whatever form that may take, can have several impactful consequences for web publishers.

In the worst-case scenario, most Google search traffic to websites will disappear. If AI Mode becomes the new default Google result, expect to see a collapse of clicks from search results to websites.

Focusing heavily on optimizing for visibility in AI answers will not save your traffic, as the CTR for cited sources is likely to be very low.

In my view, publishers have roughly three strategies for survival:

1. Google Discover

Google’s Discover feed may soften the blow somewhat, especially with the rollout onto desktop Chrome browsers.

Expanded presence of Discover on all devices with a Chrome browser gives more opportunities for publishers to be visible and drive traffic.

However, a reliance on Discover as a traffic source can encourage bad habits. Disregarding Discover’s inherent volatility, the unfortunate truth is that clickbait headlines and cheap churnalism do well in the Discover feed.

Reducing reliance on search in favor of Discover is not a strategy that lends itself well to quality journalism.

There’s a real risk that, in order to survive a search apocalypse, publishers will chase after Discover clicks at any cost. I doubt this will result in a victory for content quality.

2. Traffic & Revenue Diversification

Publishers need to grow traffic and income from more channels than just search. Due to Google’s enormous monopoly in search, diversified traffic acquisition has been a challenge.

Google is the gatekeeper of most of the web’s traffic, so of course we’ve been focused on maximising that channel.

With the risk of a greatly diminished traffic potential from Google search, other channels need to pick up the slack.

We already mentioned Discover and its risks, but there are more opportunities for publishing brands to drive readers and growth.

Paywalls seem inevitable for many publishers. While I’m a fan of freemium models, publishers will have to decide for themselves what kind of subscription model they want to implement.

A key consideration is whether your output is objectively worth paying for. This is a question few publishers can honestly answer, so unbiased external opinions will be required to make the right business decision.

Podcasts have become a cornerstone of many publishers’ audience strategies, and for good reason. They’re easy to produce, and you don’t need that many subscribers to make a podcast economically feasible.

Another content format that can drive meaningful growth is video, especially short-form video that has multiplatform potential (YouTube, TikTok, Instagram, Discover).

Email newsletters are a popular channel, and I suspect this will only grow. The way many journalists have managed to grow loyal audiences on Substack is testament to this channel’s potential.

And while social media hasn’t been a key traffic driver for many years, it can still send significant visitor numbers. Don’t sleep on those Facebook open graph headlines (also valuable for Discover).

3. Direct Brand Visits

The third strategy, and probably the most important one, is to build a strong publishing brand that is actively sought out by your audience.

No matter the features that Google or any other tech intermediary rolls out, when someone wants to visit your website, they will come to you directly. Not even Google’s AI Mode would prevent you from visiting a site you specifically ask for.

A brand search for [daily mail] in Google AI Mode provides a link to the site’s homepage at the top of the response (Image credit: Barry Adams)

Brand strength translates into audience loyalty.

A recognizable publisher will find it easier to convince its readers to install their dedicated app, subscribe to their newsletters, watch their videos, and listen to their podcasts.

A strong brand presence on the web is also, ironically, a cornerstone of AI visibility optimization.

LLMs are, after all, regurgitators of the web’s content, so if your brand is mentioned frequently on the web (i.e., in LLMs’ training data), you are more likely to be cited as a source in LLM-generated answers.

Exactly how to build a strong online publishing brand is the real question. Without going into specifics, I’ll repeat what I’ve said many times before: You need to have something that people are willing to actively seek out.

If you’re just another publisher writing the same news that others are also writing, without anything that makes you unique and worthwhile, you’re going to have a very bad time. The worst thing you can be as a publisher is forgettable.

There is a risk here, too. In an effort to cater to a specific target segment, a publisher could fall victim to “audience capture“: Feeding your audience what they want to hear rather than what’s true. We already see many examples of this, to the detriment of factual journalism.

It’s a dangerous pitfall that even the biggest news brands find difficult to navigate.

Optimizing For AI

In my previous article, I wrote a bit about how to optimize for AI Overviews.

I’ll expand on this in future articles with more tips, both technical and editorial, for optimizing for AI visibility.


This post was originally published on SEO For Google News.

Featured Image: BestForBest/Shutterstock





Digiday covers the latest from marketing and media at the annual Cannes Lions International Festival of Creativity. More from the series →

As thousands of agency folk grab their passports, sunblock and antacids to head to Cannes next week for the annual Lions fest, Omnicom has quietly hammered out its strategy and message it’s taking to the Croisette. That message is, it’s time to capture the myriad opportunities for live — be it live-streaming, sponsorship of live sports, live shopping, etc. 

Digiday has learned of a number of partnerships Omnicom and its various units (Omnicom Media Group, influencer arm Creo, commerce arm Flywheel) will be announcing next week with major platforms and retail media networks, all with an eye toward better understanding, harnessing and exploiting live opportunities for clients. Several Omnicom executives who spoke with Digiday talked of “the power of live content, conversation and commerce to drive brand growth.”

It’s all based on research Omnicom Media Group conducted that shows live can deliver more for clients thanks to evolving consumer habits and likes. Joanna O’Connell, Omnicom Media Group North America’s chief intelligence officer, spearheaded the research after OMG’s chief product officer Megan Pagliuca ID’ed the topic as one on which to focus. Nearly 1,500 U.S. consumers matched against U.S. Census data were surveyed the last week of April to generate the Rethinking Live study.

“There is truly this kind of rebound toward shared experiences,” said O’Connell. “Even as we’ve moved in a direction of so much personalization and so much … time spent alone on screens, people crave togetherness, they crave community, they crave connection. So, ‘live’ in that sense, is even more important than ever, because it’s kind of creating moments that bring people together, and they’re drawn to that.”

“At Philips, we recognize the opportunity that livestreaming and creator-led content unlock for deeper customer engagement,” said Faith Lim, ASEAN digital and media lead for Omnicom client Philips. 

Some stats support Pagliuca’s hunch and O’Connell’s research as expressed in the report — which takes the concept of live far beyond live events like the Oscars or the Super Bowl, and into live-streaming, live shopping on a number of platforms (such as Amazon), and second-screen experiences (such as conversations on X) while watching something live. For one, the study revealed that 75% of Gen Z watch social media live-streams while only 57% watch live TV content. In total, the live-stream e-commerce marketplace is estimated to hit almost $20 billion this year.

“When we asked people what they associate with live, more people (all U.S. consumers, not a specific age group) now say ‘live streaming on social platforms’ than say live TV. And that’s bananas,” said O’Connell. 

“Those are just massive numbers,” added Kevin Blazaitis, U.S. president of Creo, Omnicom’s influencer arm.

Three general findings came out of the research, said O’Connell. First, live-streams are the new prime time for younger people. One-third of younger respondents said they watch live-streamed content weekly, and they said it creates shared memories. What Omnicom will announce next week are a series of partnerships with a variety of platforms to harness this for clients, primarily through the influencers and creators who are live-streaming. 

Secondly, co-viewing is more of a thing than ever before, particularly in the esports world. But there are other examples: YouTube’s “watch with” feature adds a creator commentary layer to live sports streaming, while Thursday Night Football on Amazon also offers creator-led commentary.

Finally, private channels can’t be overlooked, since they also factor heavily into the shared experience — be it private chats, or membership in clubs. It’s a way for people to communicate with each other one-on-one as they digest shared experiences. Fans watching a Knicks game live on their TV at home might be texting each other with comments. 

“This is more about questioning traditional notions than it is saying don’t do it anymore,” said O’Connell. “Because a live strategy needs to be a lot more expansive and sophisticated. You’ve got to be much more prepared to make real-time decisions. Your budgeting has to be more flexible. You need to be paying attention to social listening so you can get involved in conversation if it’s happening. There’s so much more depth and breadth to it.”






Who are these 20-somethings? What shapes their worlds, and how do they see themselves within it? Across the event, students explored the textures of contemporary life with sincerity, navigating themes of identity, transition, belonging and disquiet. The goal was not to present polished answers but to offer something truer: a multifaceted portrait of youth, made with honesty and complexity.

From the surreal to the intimate, the projects spanned emotional and technical registers. Fragments of Becoming by Riccardo Vacca (IED Roma) used a mix of AI, visual anthropology and analogue photography to explore the liminal space between adolescence and adulthood. In Bianco Latte, Valentina Doddato (IED Milano) created a quiet reverie of memory, where nature and domestic space melt into a landscape of suspended time. Meanwhile, In Between by Sofia Valabrega (IED Torino) captured the stasis and uncertainty that defines so much of young adult life today: the longing to move forward, tempered by the fear of getting it wrong.

Curators and course coordinators Giulia Ticozzi (IED Torino), Carlotta Cattaneo (IED Milano) and Daria Scolamacchia (IED Roma) described the event as “a participatory performance” in which the table served as a stage, inviting physical movement, informal critique and shared reflection. The process was designed to empower students not just to show their work but to actively shape the space around it.

In doing so, The Dreamers became more than an exhibition. It evolved into a live archive, constantly reshaped by those who engaged with it. By placing vulnerability and agency at the centre, IED offered a vision of arts education that feels refreshingly grounded: creative, yes, but also responsive, participatory and deeply human.





Global advertising expenditure has surpassed the $1 trillion mark for the first time.

Digital advertising continues to dominate this growth, with digital channels encompassing search and social media forecast to account for 72.9% of total ad revenue by the end of the year.

From a platform perspective, Google, Meta, Amazon, and Alibaba are expected to capture more than half of global ad revenues this year.

In-house and agency-side paid media teams are working harder than ever to grow ecommerce businesses efficiently, and the amount of data being used day-to-day (even hour-to-hour) is enormous.

With this growth and investment, something is clearly working, and given that brands can map new/returning audiences to their advertising funnel and serve ads across billions of auctions, it’s a lever that millions of businesses pull.

However, with budgets being split across channels (search, social, out-of-home, etc) and brands using CRM data, analytics platforms, third-party attribution tools, and more to define their “source of truth,” fragmentation begins to appear with reporting. Only 32% of executives feel they fully capitalize on their performance marketing data for this reason.

With data being spread across several sources, ad platforms having different attribution models, and the C-suite likely asking, “Which source of truth is correct?”, reporting paid media performance for ecommerce isn’t the most straightforward task.

This post digs into key performance indicators, platform attribution & modeling, business goals, and how to bring it all together for a holistic view of your advertising efficacy.

Key Performance Indicators (KPIs)

To begin navigating paid media reporting, it starts with the KPIs that each account optimizes towards and how this feeds into channel performance.

Each of these has purpose, benefits, limitations, and practical use cases that should be viewed through a lens of attribution unique to each platform.

Short-Term Performance

Return On Ad Spend (ROAS)

This metric measures the revenue generated for every dollar spent on advertising.

If your total ad cost was $1,000 and you drove $18,500 revenue, your ROAS would be 18.5.

Cost Per Acquisition (CPA)

This metric shows the average cost to generate a sale (or a lead, depending on the goal; an ecommerce brand could, for example, use CPA to measure sign-ups for an event).

For example, if your total ad cost was $5,000 and you drove 180 sales, your CPA would be $27.78.

Cost Of Sale (CoS)

This metric measures what % of revenue is spent on advertising.

Say a brand spends $20,000 on Meta Ads and generates $100,000 in revenue, their resulting CoS would be 20%.
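Taken together, the three short-term metrics above reduce to simple ratios. Here is a minimal sketch in Python, using the example figures from this section:

```python
def roas(revenue: float, ad_spend: float) -> float:
    """Return on ad spend: revenue generated per dollar of ad spend."""
    return revenue / ad_spend

def cpa(ad_spend: float, conversions: int) -> float:
    """Cost per acquisition: average spend per sale (or lead)."""
    return ad_spend / conversions

def cos(ad_spend: float, revenue: float) -> float:
    """Cost of sale: share of revenue spent on advertising."""
    return ad_spend / revenue

print(roas(18_500, 1_000))            # 18.5
print(round(cpa(5_000, 180), 2))      # 27.78
print(f"{cos(20_000, 100_000):.0%}")  # 20%
```

The same three inputs (spend, revenue, conversions) produce all three views, which is why platforms can report them side by side from one conversion feed.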

Mid-Term Efficiency

Customer Acquisition Cost (CAC)

This metric may reflect either marketing costs associated with driving new customer acquisition or a holistic view of all costs associated with acquiring new customers.

If a business has a CAC of $175 and an AOV of $58, each new customer will need to repeat purchase ~3x for acquisition to become profitable.

Marketing Efficiency Ratio (MER)

This metric shows how efficiently your total ad spend is converting into revenue, regardless of the channel.

Where MER is especially useful is when brands are active on multiple ad networks, all of which contribute in some way to the final sale, and where siloed platform attribution is inconsistent.

Long-Term Strategic

Customer Lifetime Value (CLV Or CLTV)

Used alongside CAC, this metric is essential for understanding the true value of both acquisition and retention, which is important for almost all ecommerce models, and especially important for brands looking to capitalize on repeat purchases and subscription-based models.
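The mid- and long-term metrics follow the same pattern. A rough sketch below; the break-even calculation mirrors the CAC example above, while the MER totals and CLV inputs are illustrative, and the CLV formula is deliberately naive (it ignores margin and discounting):

```python
def purchases_to_cover_cac(cac: float, aov: float) -> float:
    """Purchases a new customer must make for order value to cover CAC.
    Naively counts full AOV toward recovery; a real model would use
    contribution margin instead."""
    return cac / aov

def mer(total_revenue: float, total_ad_spend: float) -> float:
    """Marketing efficiency ratio: blended revenue per ad dollar, all channels."""
    return total_revenue / total_ad_spend

def simple_clv(aov: float, purchases_per_year: float, years: float) -> float:
    """Naive CLV: order value x purchase frequency x expected customer lifespan."""
    return aov * purchases_per_year * years

print(round(purchases_to_cover_cac(175, 58), 1))  # 3.0 -- the ~3x from the CAC example
print(mer(250_000, 50_000))                       # 5.0 (illustrative totals)
print(simple_clv(58, 4, 3))                       # 696 (illustrative inputs)
```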

So, which one should you be reporting on for your ecommerce brand?

Speaking from experience, there isn’t a right or wrong answer, nor is there a blueprint for which KPIs you should be reporting on.

Having a multifaceted approach will enable more informed decision making, combining short-, medium-, and long-term KPIs to form a holistic model for measuring performance that feeds into your reports.

However, even after choosing your KPIs, different attribution models across advertising platforms add another layer of complexity, as does the ever-evolving customer journey involving multiple touchpoints across devices, channels, etc.

The Ad Platforms

Each ad platform handles attribution and tracking differently.

Take Google Ads, for example, the default model is Data-Driven Attribution (DDA), and when using the Google Ads pixel, only paid channels receive credit.

Then, with a GA4 integration to Google Ads, both paid and organic are eligible to receive credit for sales.

Click-through windows, value, count, etc, can all be customised to provide a view of performance that feeds into your Google Ads campaigns.

Using the Google Ads pixel, say a user clicks a shopping ad, then a search ad, and then returns via organic to make the purchase, 40% of the credit could go to shopping, and 60% to the search ad.

With the GA4 integrated conversion, shopping could receive 30%, search 40%, and organic visit 30%, resulting in 70% of the value being attributed back to the campaigns in-platform.
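To make the contrast concrete, here is a hypothetical sketch of the two credit splits for a single $100 order. The 40/60 and 30/40/30 weightings are the illustrative figures from the text, not actual DDA output:

```python
# One hypothetical $100 purchase after the journey described above:
# shopping ad click -> search ad click -> organic visit -> conversion.
order_value = 100.0

# Google Ads pixel: only paid touchpoints are eligible for credit.
pixel_credit = {"shopping": 0.40, "search": 0.60}

# GA4-imported conversion: organic is also eligible, diluting paid credit.
ga4_credit = {"shopping": 0.30, "search": 0.40, "organic": 0.30}

def paid_revenue(credit: dict, value: float) -> float:
    """Revenue attributed to paid campaigns in-platform."""
    return sum(share * value for channel, share in credit.items()
               if channel != "organic")

print(round(paid_revenue(pixel_credit, order_value), 2))  # 100.0
print(round(paid_revenue(ga4_credit, order_value), 2))    # 70.0
```

Same purchase, same journey, yet the in-platform revenue figure differs by 30% purely because of which conversion source feeds the account.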

Now, comparing this to Meta Ads, which uses a seven-day click and one-day view attribution window by default, when a user converts within this time frame, 100% of the credit will be attributed to Meta.

This is why the narrative for conversion tracking on Meta is one of overrepresentation, with brands seeing inflated revenue numbers vs. other channels, even more so with loose audience targeting, where campaign types such as ASC can serve assets to audiences who have already interacted with your brand.

Then, when you dig into third-party analytics, the comparisons between Google Ads, Meta Ads, Pinterest Ads, etc., are almost the complete opposite.

So, what should this data be used for, and how does it factor into the bigger picture?

In-platform metrics are best viewed as directional.

They help optimize within the walls of that specific platform to identify high-performing audiences, auctions, creatives, and placements, but they rarely reflect the true incremental value of paid media to your business.

The data in Google, Meta, Pinterest, etc. is a platform-specific lens on performance, and the goal shouldn’t be to pick one or ignore these metrics.

It should be to interpret these for what they are and how they play into the overarching strategy.

The Bigger Picture

KPIs such as ROAS and CPA offer immediate insights but provide a fragmented view of paid media performance.

To gain a comprehensive understanding, brands must combine medium- to long-term KPIs with broader modeling and tests that account for the multifaceted nature of performance marketing, while considering how complex customer journeys are in this day and age.

Marketing Mix Modeling (MMM)

Introduced in the 1950s, MMM is a statistical analysis that evaluates the effectiveness of marketing channels over time.

By analyzing historical data, MMM helps advertisers understand how different marketing activities contribute to sales and can guide budget allocation.

A 2024 Nielsen study found that 30% of global marketers cite MMM as their preferred method of measuring holistic ROI.

The very short version of how to get started with MMM includes:

  1. Collecting aggregated data: roughly speaking, at least two years of weekly data across all channels, mapped out with every possible variable (e.g., pricing, promotions, weather, social trends, etc.).
  2. Defining the dependent variable, which for ecommerce will be sales or revenue.
  3. Run regression modeling to isolate the contribution of each variable to sales (adjusting for overlaps, lags, etc.)
  4. Analyze, optimize, and report on the coefficients to understand the relative impact and ROI of your paid media activity as whole.
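As an illustration of step 3, the sketch below fits a toy regression on synthetic weekly data with NumPy. Channel names, spend ranges, and coefficients are all invented; a production MMM would add adstock, saturation, and seasonality terms:

```python
import numpy as np

# Toy MMM: regress synthetic weekly sales on channel spend and a promo flag.
rng = np.random.default_rng(0)
weeks = 104  # two years of weekly observations

search = rng.uniform(5, 15, weeks)   # weekly search spend ($k)
social = rng.uniform(2, 10, weeks)   # weekly social spend ($k)
promo = rng.integers(0, 2, weeks)    # was a promotion running that week?

# Simulate observed sales from "true" contributions plus noise.
sales = 50 + 3.0 * search + 1.5 * social + 8.0 * promo + rng.normal(0, 2, weeks)

# Step 3: ordinary least squares isolates each driver's contribution.
X = np.column_stack([np.ones(weeks), search, social, promo])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
base, b_search, b_social, b_promo = coefs

# The fitted coefficients recover roughly 3.0, 1.5, and 8.0 -- the
# simulated per-$k (and per-promo-week) impact of each driver.
print(f"search: {b_search:.2f}  social: {b_social:.2f}  promo: {b_promo:.2f}")
```

Because the model works on aggregated weekly totals, no user-level tracking is involved, which is the privacy advantage noted below.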

Unlike platform attribution, this doesn’t rely on user-level tracking, which is especially useful with privacy restrictions now and in the future.

From a tactical standpoint, your chosen KPIs will still lead campaign optimizations for your day-to-day management, but at a macro level, MMM will determine where to invest your budget and why.

Incrementality Testing

Instead of relying on attribution models, this uses controlled experiments to isolate the impact of your paid media campaigns on actual business outcomes.

This kind of testing aims to answer the question, “Would these sales have happened without the paid media investment?”

This involves:

  1. Defining an objective or independent variable (e.g., sales, revenue, etc.)
  2. Creating test and control groups. This could be by audience or geography – one will be exposed to the campaigns and the other will not.
  3. Run the experiment while keeping all conditions equal across both groups.
  4. Compare the outcomes, analyze performance, and calculate the impact.
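A back-of-the-envelope version of such a geo test, with invented regions and numbers, might look like this:

```python
# Hypothetical geo-split incrementality test: ads run in "test" regions,
# paused in comparable "control" regions; all other conditions held equal.
test_sales = {"austin": 120_000, "denver": 95_000, "portland": 88_000}
control_sales = {"san_antonio": 100_000, "kansas_city": 82_000, "salem": 80_000}
ad_spend = 15_000  # spend in the test regions during the experiment

test_total = sum(test_sales.values())        # 303,000
control_total = sum(control_sales.values())  # 262,000

# Step 4: the difference is the sales the campaigns actually caused.
incremental_revenue = test_total - control_total
lift = incremental_revenue / control_total
incremental_roas = incremental_revenue / ad_spend

print(f"lift: {lift:.1%}, incremental ROAS: {incremental_roas:.2f}")
```

Note that incremental ROAS here (≈2.7) can be far lower than the in-platform ROAS the same campaigns report, which is exactly the gap these experiments expose.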

This isn’t one that’s run every week, but from a strategic point of view, these tests help to validate the actual performance of paid media and direct where and what spend should be allocated across ad platforms.

Operational Factors

These are equally as important (if not more) for ecommerce reporting and absolutely need to be considered when setting KPIs and beginning to think about modeling, testing, etc.

Without considering these factors, brands will use inaccurate data from the get-go.

Think about the impact of buy now, pay later. Providers such as Klarna or Clearpay can lead to higher return rates, as bundle buying and impulsive purchases become more accessible.

Without considering operational factors, using this example and a basic in-platform ROAS, brands would be optimizing toward incorrect checkout data with higher AOVs and no consideration of returns, restocking, etc.
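A quick sketch of how returns can reshape that picture. The spend, revenue, and return-rate figures are illustrative, not benchmarks:

```python
# Adjusting in-platform ROAS for returns (e.g., elevated BNPL return rates).
ad_spend = 10_000
checkout_revenue = 45_000  # what the ad platform reports at checkout
return_rate = 0.18         # share of revenue later refunded
restocking_cost = 1_200    # handling/restocking returned items

platform_roas = checkout_revenue / ad_spend  # the number the platform shows
net_revenue = checkout_revenue * (1 - return_rate) - restocking_cost
true_roas = net_revenue / ad_spend           # what the business actually keeps

print(platform_roas, round(true_roas, 2))
```

In this illustration a reported 4.5 ROAS shrinks to roughly 3.57 once returns are netted out, so campaigns "optimized" on checkout data are chasing inflated targets.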

Ultimately, building a true picture of paid media performance means stepping beyond the platform KPIs and metrics to consider all factors involved and how best to model the data to uncover not just “what” is happening, but “why” it is and how this impacts the wider business.

Bringing It All Together

No single tool or model tells the full story.

You’ll need to compare platform data, internal analytics, and external modeling to build a more reliable view of performance.

The first step is getting watertight KPIs nailed down that consider every possible operational factor so you know the platforms are being fed the correct data, and if you need to modify these based on platform nuances due to differing attribution models, do it.

Once these are in place, find a model that you trust and that will show you the holistic impact of your paid media spend on overall business performance.

You could explore the use of third-party attribution tools that aim to blend data together, but even with these, you’ll still require clear and accurate KPIs and reliable tracking.

Then, when it comes to the visual side of reporting, the world is your oyster.

Looker Studio, Tableau, and Datorama are among the long list of well-known platforms, and with most brands using three to four business intelligence tools and 67% of analysts relying on multiple dashboards, don’t stress if you can’t get everything under one lens.

When all of this is executed and made into a priority over the short-term ebbs and flows of paid media performance, this is the point where connecting media spend to profit begins.


Featured Image: Surasak_Ch/Shutterstock





As FIFA Club World Cup observers fret about ticket sales, Warner Bros. Discovery shares no such concerns about the event’s ads.

TNT Sports has partnered with DAZN on a sublicense for English-language coverage of the event, broadcasting 24 matches on its TNT, TBS, and TruTV channels—with all 63 matches streamed on DAZN from June 14 through July 13.

According to Jon Diament, WBD’s evp of ad sales, ad inventory for the TNT Sports slate of matches is “virtually sold out” heading into its June 14 kickoff between Lionel Messi’s Major League Soccer (MLS) side Inter Miami and Egyptian Premier League team Al Ahly at 8 p.m. ET on TBS.

“The upfront had a lot of live sports increases, and the second quarter of this year has just been terrific,” Diament said. “I think if you’re an advertiser, you’re looking for reach, engagement, the live nature of sports, multiplatform—all those things keep reiterating the value of live sports in the second quarter, and that got us into a sellout position.”

Diament pointed to TNT Sports’ already strong ad sales during the quarter, from college basketball’s March Madness and the NHL and NBA playoffs to the company’s inaugural domestic broadcast of the French Open at Roland-Garros, which not only sold out its ad inventory but saw a 25% increase in viewership from a year earlier. 

Some of that second-quarter success came from official partners of professional leagues gravitating toward playoff season, Diament noted, but also from strong scatter demand for live sports. For TNT Sports’ Club World Cup coverage, for example, ad inventory was picked up by FIFA partners including Visa, Bank of America, and Michelob Ultra—which sponsors post-match coverage and the Superior Player of the Match—but also by other brands including Vanda Pharmaceuticals (halftime and half highlights), DraftKings (pregame odds and promo), Verizon (studio club profiles), Lowe’s, Apple, Heineken, Adidas, Nike, Starbucks, Amica, JP Morgan Chase, and even Mas+ by Messi.

“Most of the FIFA sponsors took advantage of the new tournament,” Diament said. “We have a combination of official partners, but also, because we’re TNT Sports and we’re not DAZN or FIFA, we can go to the overall market.”

Setting the screens

Through the partnership with DAZN, TNT Sports has been able to offer those advertisers more in-game options, including two-box ad presentation (or squeeze back) during player substitutions, water breaks or other pauses in the action, and a game-clock feature that lets a brand take over all on-screen graphics for 10 seconds before putting its logo next to the clock for two minutes.

But TNT Sports sees opportunity in its digital and social media platforms Bleacher Report and House of Highlights, which enhanced its Roland-Garros coverage a few weeks ago by bringing in sports figures like Odell Beckham Jr. and Derrick Rose for cameos and commentary. Diament noted that TNT Sports’ coverage of U.S. Soccer and its work with veteran soccer talent like analysts Luke Wileman, Brian Dunseth, and Melissa Ortiz have made it familiar with soccer’s younger U.S. viewership and helped it adapt its broadcast and social strategy to a daylong event not dissimilar to March Madness or Roland-Garros.

That monopoly on fans’ attention hasn’t gone unnoticed by event advertisers.

“There’s a lot of push notifications, people signing up for what they’re interested in, they have a good feeling for how to create excitement before and after the matches themselves,” Diament said. “This is one of the great things about social media, it’s 24/7 and it’s just not when the live games are happening, so people are carrying around their phones and that younger demo stays connected.”







Printers get a lot of the spotlight, and rightly so. 2D printers have been around for so long that the technology has barely moved on in years. Things are changing, though, and eufyMake is leading the charge with its eufyMake E1 UV printer. Billed as a business in a box, this printer can print images on just about any material you care to think of, from wood to metal. Not only that, but it promises to print on multiple shapes, from the perfectly flat to the completely round, like glasses and coffee cups. To make the E1 even more appealing, it can also print with depth, add a gloss finish, and much more.

For makers, creators and artists, this opens up a whole world of possibilities, from selling more individual prints to merch, assuming the E1 delivers.


Inks and cleaning kit are included (Image credit: Rob Redman)

Everything is ready to go out of the box (Image credit: Rob Redman)

Texture printing, with brush stroke generation, is very good (Image credit: Rob Redman)

The E1 prints easily on rough or uneven surfaces (Image credit: Rob Redman)





The launch of ChatGPT blew apart the search industry, and the last few years have seen more and more AI integration into search engine results pages.

In an attempt to keep up with the LLMs, Google launched AI Overviews and just announced AI Mode tabs.

The expectation is that SERPs will become blended with a Large Language Model (LLM) interface, and the nature of how users search will adapt to conversations and journeys.

However, there is an issue surrounding AI hallucinations and misinformation within LLM- and Google AI Overview-generated results, and it seems to be largely ignored, not just by Google but also by the news publishers it affects.

More worrying is that users are either unaware or prepared to accept the cost of misinformation for the sake of convenience.

Barry Adams is the authority on editorial SEO and works with the leading news publisher titles worldwide via Polemic Digital. Barry also founded the News & Editorial SEO Summit along with John Shehata.

I read a LinkedIn post from Barry where he said:

“LLMs are incredibly dumb. There is nothing intelligent about LLMs. They’re advanced word predictors, and using them for any purpose that requires a basis in verifiable facts – like search queries – is fundamentally wrong.

But people don’t seem to care. Google doesn’t seem to care. And the tech industry sure as hell doesn’t care, they’re wilfully blinded by dollar signs.

I don’t feel the wider media are sufficiently reporting on the inherent inaccuracies of LLMs. Publishers are keen to say that generative AI could be an existential threat to publishing on the web, yet they fail to consistently point out GenAI’s biggest weakness.”

The post prompted me to speak to him in more detail about LLM hallucinations, their impact on publishing, and what the industry needs to understand about AI’s limitations.

You can watch the full interview with Barry on IMHO below, or continue reading the article summary.

Why Are LLMs So Bad At Citing Sources?

I asked Barry to explain why LLMs struggle with accurate source attribution and factual reliability.

Barry responded, “It’s because they don’t know anything. There’s no intelligence. I think calling them AIs is the wrong label. They’re not intelligent in any way. They’re probability machines. They don’t have any reasoning faculties as we understand it.”

He explained that LLMs operate by regurgitating answers based on training data, then attempting to rationalize their responses through grounding efforts and link citations.

Even with careful prompting to use only verified sources, these systems maintain a high probability of hallucinating references.

“They are just predictive text from your phone, on steroids, and they will just make stuff up and very confidently present it to you because that’s just what they do. That’s the entire nature of the technology,” Barry emphasized.

This confident presentation of potentially false information represents a fundamental problem with how these systems are being deployed in scenarios they’re not suited for.

Are We Creating An AI Spiral Of Misinformation?

I shared with Barry my concerns about an AI misinformation spiral where AI content increasingly references other AI content, potentially losing the source of facts and truth entirely.

Barry’s outlook was pessimistic, “I don’t think people care as much about truth as maybe we believe they should. I think people will accept information presented to them if it’s useful and if it conforms with their pre-existing beliefs.”

“People don’t really care about truth. They care about convenience.”

He argued that the last 15 years of social media have proven that people prioritize confirmation of their beliefs over factual accuracy.

LLMs facilitate this process even more than social media by providing convenient answers without requiring critical thinking or verification.

“The real threat is how AI is replacing truth with convenience,” Barry observed, noting that Google’s embrace of AI represents a clear step away from surfacing factual information toward providing what users want to hear.

Barry warned we’re entering a spiral where “entire societies will live in parallel realities and we’ll deride the other side as being fake news and just not real.”

Why Isn’t Mainstream Media Calling Out AI’s Limitations?

I asked Barry why mainstream media isn’t more vocal about AI’s weaknesses, especially given that publishers could save themselves by influencing public perception of Gen AI limitations.

Barry identified several factors: “Google is such a powerful force in driving traffic and revenue to publishers that a lot of publishers are afraid to write too critically about Google because they feel there might be repercussions.”

He also noted that many journalists don’t genuinely understand how AI systems work. Technology journalists who understand the issues sometimes raise questions, but general reporters for major newspapers often lack the knowledge to scrutinize AI claims properly.

Barry pointed to Google’s promise that AI Overviews would send more traffic to publishers as an example: “It turns out, no, that’s the exact opposite of what’s happening, which everybody with two brain cells saw coming a mile away.”

How Do We Explain The Traffic Reduction To News Publishers?

I noted research that shows users do click on sources to verify AI outputs, and that Google doesn’t show AI Overviews on top news stories. Yet, traffic to news publishers continues to decline overall.

Barry explained this involves multiple factors:

“People do click on sources. People do double-check the citations, but not to the same extent as before. ChatGPT and Gemini will give you an answer. People will click two or three links to verify.”

Previously, users conducting their own research would click 30 to 40 links and read them in detail. Now they might verify AI responses with just a few clicks.

Additionally, while news publishers are less affected by AI Overviews, they’ve lost traffic on explainer content, background stories, and analysis pieces that AI now handles directly with minimal click-through to sources.

Barry emphasized that Google has been diminishing publisher traffic for years through algorithm updates and efforts to keep users within Google’s ecosystem longer.

“Google is the monopoly informational gateway on the web. So you can say, ‘Oh, don’t be dependent on Google,’ but you have to be where your users are and you cannot have a viable publishing business without heavily relying on Google traffic.”

What Should Publishers Do To Survive?

I asked Barry for his recommendations on optimizing for LLM inclusion and how to survive the introduction of AI-generated search results.

Barry advised publishers to accept that search traffic will diminish while focusing on building a stronger brand identity.

“I think publishers need to be more confident about what they are and specifically what they’re not.”

He highlighted the Financial Times as an exemplary model because “nobody has any doubt about what the Financial Times is and what kind of reporting they’re signing up for.”

This clarity enables strong subscription conversion because readers understand the specific value they’re receiving.

Barry emphasized the importance of developing brand power that makes users specifically seek out particular publications, “I think too many publishers try to be everything to everybody and therefore are nothing to nobody. You need to have a strong brand voice.”

He cited the example of the Daily Mail, which succeeds through consistent brand identity, with users specifically searching for the brand name alongside topical terms, such as “Meghan Markle Daily Mail” or “Prince Harry Daily Mail.”

The goal is to build direct relationships that bypass intermediaries through apps, newsletters, and direct website visits.

The Brand Identity Imperative

Barry stressed that publishers covering similar topics with interchangeable content face existential threats.

He works with publishers where “they’re all reporting the same stuff with the same screenshots and the same set photos and pretty much the same content.”

Such publications become vulnerable because readers lose nothing by substituting one source for another. Success requires developing unique value propositions that make audiences specifically seek out particular publications.

“You need to have a very strong brand identity as a publisher. And if you don’t have it, you probably won’t exist in the next five to ten years,” Barry concluded.

Barry advised news publishers to focus on brand development, subscription models, and building content ecosystems that don’t rely entirely on Google. That may mean fewer clicks, but more meaningful, higher-quality engagement.

Moving Forward

Barry’s opinion and the reality of the changes AI is forcing are hard truths.

The industry requires honest acknowledgment of AI limitations, strategic brand building, and acceptance that easy search traffic won’t return.

Publishers have two options: continue chasing diminishing search traffic with the same content everyone else is producing, or invest in direct audience relationships that provide a sustainable foundation for quality journalism.

Thank you to Barry Adams for offering his insights and being my guest on IMHO.


Featured Image: Shelley Walsh/Search Engine Journal 





Most ABM programs run on static data and generic assumptions. Marketers spend countless hours crafting personalized campaigns based on firmographics, technographics, and third-party intent signals—the same data their competitors have access to.

It doesn’t have to be that way. What if the most valuable intelligence for your ABM campaigns is already sitting in your CRM and buried in sales call transcripts?

The new integration of GTM enrichment product Clay and the Gong Revenue AI Platform is the beginning of a shift in how we think about account intelligence and ABM personalization. For the first time, marketers can systematically extract, analyze and operationalize the rich conversation insights that sales teams gather daily. Then, they can use that intelligence to fuel hyper-targeted campaigns at scale.

The intelligence gap in traditional ABM programs

Traditional ABM has a fundamental flaw. Despite all the advances in intent data and predictive analytics, most programs use educated guesses about prospects’ needs. Up to 97% of marketers say ABM delivers a higher ROI than other marketing strategies, recent research shows, but only if done right.

The problem is data quality and relevance. Nearly half of organizations (47%) cite siloed data as their biggest barrier to gaining buyer insight. They target accounts based on industry classifications and company size, then personalize content around generic pain points that may or may not be relevant.

Meanwhile, your sales team is having actual conversations with prospects every day. They’re hearing specific challenges, budget constraints, technical requirements and competitive concerns directly from the buyer’s mouth.

That is the richest possible source of account intelligence. But historically, it’s stayed locked in Gong and been impossible to operationalize at scale—the integration with Clay changes that.

How the integration works

It lets you extract transcripts, identify mentions with Claygent and trigger automations with various data providers. The real power lies in what you can do with this conversation intelligence once it’s in Clay’s enrichment engine.

Clay can:

More importantly, Clay can use those insights to identify lookalike accounts with similar characteristics and apply the same intelligence to your broader target account list.

The integration includes three core capabilities:

Dig deeper: Could AI be what finally aligns marketing and sales teams?

Conversation-driven lookalike targeting

Most marketers are missing the biggest opportunity here. The integration isn’t just about organizing call notes — it’s about using conversation intelligence to improve your ABM targeting strategy.

Let’s say you’re targeting financial services companies with 1,000+ employees. Your sales team has a discovery call with one of these and learns they’re struggling with regulatory compliance automation. They have a Q2 budget allocated for new tech solutions, and they’re currently evaluating three specific competitors.

Clay maps those conversation insights to that account’s profile, and then you can run a lookalike analysis against your entire target account list. Suddenly, you’ve identified 50 other financial services companies with similar characteristics, likely dealing with the same regulatory compliance challenges. You can now target those lookalike accounts with messaging directly addressing the pain points you got from customer conversations.
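A rough sketch of what that lookalike scoring could look like, with invented weights, fields, and account data (this is not Clay’s actual algorithm):

```python
# Hypothetical account profiles: firmographics plus pain-point tags
# derived from call transcripts. All data here is illustrative.
seed = {"industry": "financial services", "employees": 1200,
        "pains": {"compliance", "budget"}}

candidates = [
    {"name": "Acme Bank", "industry": "financial services",
     "employees": 3000, "pains": {"compliance"}},
    {"name": "RetailCo", "industry": "retail",
     "employees": 1100, "pains": {"budget"}},
]

def similarity(a: dict, b: dict) -> float:
    """Crude lookalike score: same industry, comparable headcount,
    and overlapping pain points (Jaccard). Weights are arbitrary."""
    score = 0.0
    if a["industry"] == b["industry"]:
        score += 0.5
    size_ratio = min(a["employees"], b["employees"]) / max(a["employees"], b["employees"])
    score += 0.2 * size_ratio
    overlap = len(a["pains"] & b["pains"]) / len(a["pains"] | b["pains"])
    score += 0.3 * overlap
    return round(score, 3)

# Rank target accounts by resemblance to the seed account.
ranked = sorted(candidates, key=lambda c: similarity(seed, c), reverse=True)
for c in ranked:
    print(c["name"], similarity(seed, c))
```

The point of the sketch is the ranking, not the numbers: accounts that share an industry and a confirmed pain point surface first, so campaign messaging built from real conversations goes to the accounts most likely to share those pains.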

This is what account-based marketing was supposed to be: Personalized campaigns based on real customer needs rather than demographic stereotypes.

Strategic frameworks for implementation

The most successful implementations follow a systematic approach that puts conversation intelligence at the center of ABM strategy:

Framework 1: The insight capture loop

Framework 2: Signal-based campaign triggers

Instead of running static ABM campaigns, use conversation triggers to launch dynamic sequences:

Framework 3: Multi-thread account penetration

The typical buying group for a complex B2B solution involves six to 10 decision-makers. Use conversation intelligence to map the complete buying committee:

Dig deeper: Is your ABM strategy keeping up with the times?

The future of ABM is conversation-driven

This integration represents something bigger than a new data connector. It creates a foundation for AI-driven ABM. 

This approach recognizes that the most valuable account intelligence doesn’t come from third-party data providers or intent tracking platforms. It comes from actual conversations between your sales team and prospects.

Traditional ABM platforms excel at organizing static data and automating generic outreach. They’re fundamentally limited by the quality of inputs (garbage in, garbage out). When your ABM campaigns are based on assumptions about what accounts care about, you’re competing on the same playing field as everyone else.

AI-driven ABM built around conversations changes this. You’re not guessing what accounts need, you know. You’re not personalizing based on job titles and company size; you’re personalizing based on actual pain points and buying criteria expressed in the customer’s own words.

This is what puts AI to work for marketing in a meaningful way: not generating more content or automating more emails, but surfacing unique insights that can’t be found anywhere else and operationalizing them at scale.

Organizations that embrace AI-driven ABM will have a major advantage. They’ll speak directly to what prospects care about, while their competitors are still guessing based on demographic data.

The future of ABM isn’t about better targeting or more personalization. It’s about building campaigns on the foundation of what customers actually say they need. And for the first time, the integration of Clay and Gong makes that systematically possible.

Dig deeper: How ABM systems are evolving to meet changing B2B buying behaviors

Contributing authors are invited to create content for MarTech and are chosen for their expertise and contribution to the martech community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. The opinions they express are their own.


