April 13, 2025


As a marketer, I want to know whether there are specific things I should be doing to improve our LLM visibility that I'm not already doing as part of my routine marketing and SEO efforts.

So far, it doesn't seem like there are.

There is so much overlap between SEO and GEO that it doesn't seem useful to treat them as distinct processes.

The things that contribute to good visibility in search engines also contribute to good visibility in LLMs. GEO appears to be a byproduct of SEO, something that doesn't require dedicated or separate effort. If you want to improve your presence in LLM output, hire an SEO.

Sidenote.

GEO is “generative engine optimization”, LLMO is “large language model optimization”, AEO is “answer engine optimization”. Three names for the same idea.

It's worth unpacking this a bit. As far as my layperson's understanding goes, there are three main ways you can improve your visibility in LLMs:

1. Increase your visibility in training data

Large language models are trained on huge datasets of text. The more prevalent your brand is within that data, and the more closely associated it appears to be with the topics you care about, the more visible you will be in LLM output for those topics.

We can't influence the data LLMs have already trained on, but we can create more content on our core topics for inclusion in future rounds of training, both on our website and on third-party websites.

Creating well-structured content on relevant topics is one of the core tenets of SEO, as is encouraging other brands to reference you within their content. Verdict: just SEO.

2. Increase your visibility in data sources used for RAG and grounding

LLMs increasingly use external data sources to improve the recency and accuracy of their outputs. They can search the web, and they draw on traditional search indexes from companies like Bing and Google.

OpenAI's VP of Engineering on Reddit confirming the use of the Bing index as part of ChatGPT Search.

It's fair to say that being more visible in these data sources will likely increase visibility in LLM responses. The process of becoming more visible in "traditional" search indexes is, you guessed it, SEO.
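To make the mechanics concrete, here is a minimal sketch of that grounding loop. The search_web() and generate() helpers are hypothetical stand-ins for whichever search API and LLM client you use, not real library calls; the stubs just make the sketch runnable. The practical point is that if your pages rank in the underlying search index, they end up inside the prompt the model answers from.

```python
# Schematic sketch of retrieval-augmented generation (RAG): pull fresh results
# from a search index, then ask the model to answer using only those sources.
# search_web() and generate() are hypothetical placeholders, stubbed here.

def search_web(query: str, top_k: int = 5) -> list[dict]:
    # Stub for a real search API (e.g. a Bing- or Google-backed index).
    return [{"title": "Example page", "snippet": "Example snippet.", "url": "https://example.com"}][:top_k]

def generate(prompt: str) -> str:
    # Stub for a call to whichever LLM you use.
    return f"(model output grounded on a prompt of {len(prompt)} characters)"

def answer_with_grounding(question: str) -> str:
    results = search_web(question)
    # Pages that rank in the search index become the sources the model cites.
    context = "\n\n".join(
        f"[{i + 1}] {r['title']}\n{r['snippet']}\n{r['url']}"
        for i, r in enumerate(results)
    )
    prompt = (
        "Answer the question using only the sources below, citing them by number.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(answer_with_grounding("What does Acme Analytics do?"))
```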

3. Abuse adversarial examples

LLMs are susceptible to manipulation, and it's possible to trick these models into recommending you when they otherwise wouldn't. These are damaging hacks that offer short-term benefit but will probably bite you in the long term.

This is, and I'm only half joking, just black hat SEO.

To summarize these three points, the core mechanism for improving visibility in LLM output is creating relevant content on topics your brand wants to be associated with, both on and off your website.

That's SEO.

Now, this may not be true forever. Large language models are changing all the time, and there may be more divergence between search optimization and LLM optimization as time progresses.

But I suspect the opposite will happen. As search engines integrate more generative AI into the search experience, and LLMs continue using "traditional" search indexes to ground their output, I think there is likely to be less divergence, and the boundaries between SEO and GEO will become even smaller, or nonexistent.

As long as "content" remains the primary medium for both LLMs and search engines, the core mechanisms of influence will likely remain the same. Or, as someone commented on one of my recent LinkedIn posts:

"There's only so many ways you can shake a stick at aggregating a bunch of information, ranking it, and then disseminating your best approximation of what the best and most accurate result/information would be."

Aedan Johnston

I shared the above opinion in a LinkedIn post and received some genuinely excellent responses.

Most people agreed with my sentiment, but others shared nuances between LLMs and search engines that are worth understanding, even if they don't (in my view) warrant creating the new discipline of GEO:

This is probably the biggest, clearest difference between GEO and SEO. Unlinked mentions, text written about your brand on other websites, have very little influence on SEO, but a much bigger influence on GEO.

Search engines have many ways to determine the "authority" of a brand on a given topic, but backlinks are one of the most important. This was Google's core insight: that links from relevant websites could function as a "vote" for the authority of the linked-to website (a.k.a. PageRank).

LLMs operate differently. They derive their understanding of a brand's authority from the words on the page, from the prevalence of particular phrases, the co-occurrence of different words and topics, and the context in which those words are used. Unlinked content will further an LLM's understanding of your brand in a way that won't help a search engine.

As Gianluca Fiorelli writes in his excellent article:

"Brand mentions now matter not because they boost 'authority' directly but because they strengthen the position of the brand as an entity within the broader semantic network.

When a brand is mentioned across multiple (trusted) sources:

The entity embedding for the brand becomes stronger.

The brand becomes more tightly connected to related entities.

The cosine similarity between the brand and related concepts increases.

The LLM 'learns' that this brand is relevant and authoritative within that topic area."

Gianluca Fiorelli
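The "cosine similarity" idea in that quote is easier to see with a toy example. The sketch below builds crude co-occurrence vectors from a few made-up snippets about a fictional "acme" brand; real model embeddings are learned rather than counted, but the intuition is the same: the more a brand co-occurs with a topic across sources, the smaller the angle between their vectors.

```python
# Toy illustration of the quoted idea: build crude "embeddings" from term
# co-occurrence across a handful of invented source snippets, then measure
# cosine similarity between a brand term and a topic term.
import numpy as np

snippets = [
    "acme publishes research on web analytics and attribution",
    "many teams compare acme with other tools for web analytics",
    "the acme guide to attribution modelling is widely cited",
    "gardening tips for growing tomatoes in raised beds",
]

vocab = sorted({word for s in snippets for word in s.split()})
index = {word: i for i, word in enumerate(vocab)}

# Each term's vector = counts of the words it co-occurs with across snippets.
vectors = np.zeros((len(vocab), len(vocab)))
for s in snippets:
    words = s.split()
    for w in words:
        for other in words:
            vectors[index[w], index[other]] += 1

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[index["acme"]], vectors[index["analytics"]]))  # relatively high
print(cosine(vectors[index["acme"]], vectors[index["tomatoes"]]))   # much lower
```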

Many companies already value off-site mentions, albeit with the caveat that these mentions should be linked (and dofollow). Now, I can imagine brands relaxing their definition of a "good" off-site mention, and being happier with unlinked mentions on platforms that pass little traditional search benefit.

As Eli Schwartz puts it,

"In this paradigm, links don't have to be hyperlinked (LLMs read content) or limited to traditional websites. Mentions in credible publications or discussions sparked on professional networks (hello, knowledge bases and forums) all enhance visibility within this framework."

Eli Schwartz

Track brand mentions with Brand Radar

You can use our new tool, Brand Radar, to track your brand's visibility in AI mentions, starting with AI Overviews.

Enter the topic you want to monitor, your brand (or your competitors' brands), and see impressions, share of voice, and even specific AI outputs mentioning your brand:

I think the inverse of the above point is also true. Many companies currently build backlinks on websites with little relevance to their brand, and publish content with no connection to their business, simply for the traffic it brings (what we now call site reputation abuse).

These tactics offer enough SEO benefit that many people still deem them worthwhile, but they will offer even less benefit for LLM visibility. Without any relevant context surrounding these links or articles, they will do nothing to further an LLM's understanding of the brand or improve the likelihood of it appearing in outputs.

Some content types have relatively little influence on SEO visibility but a greater influence on LLM visibility.

We ran research to explore the types of pages most likely to receive traffic from LLMs. We took a sample of pageviews from LLMs and from non-LLM sources and compared how those pageviews were distributed across page types.
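For anyone who wants to run a similar comparison on their own analytics export, here is a rough sketch. The log format and the page-type buckets are hypothetical assumptions for illustration, and the referrer list covers only a few common LLM surfaces rather than an exhaustive set.

```python
# Rough sketch: split pageview logs into "LLM-referred" and "other" buckets,
# then count page types in each bucket. Inputs are (referrer, path) tuples,
# a hypothetical simplification of a real analytics export.
from collections import Counter
from urllib.parse import urlparse

LLM_REFERRERS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def is_llm_referred(referrer: str) -> bool:
    host = urlparse(referrer).hostname or ""
    return any(host == d or host.endswith("." + d) for d in LLM_REFERRERS)

def page_type(path: str) -> str:
    # Very crude buckets, for illustration only.
    if path in ("/", "/pricing", "/about"):
        return "core page"
    if path.endswith(".pdf"):
        return "document"
    if "/category/" in path or "/products/" in path:
        return "listing"
    return "other"

def compare(pageviews):
    llm, non_llm = Counter(), Counter()
    for referrer, path in pageviews:
        (llm if is_llm_referred(referrer) else non_llm)[page_type(path)] += 1
    return llm, non_llm

sample = [
    ("https://chatgpt.com/", "/pricing"),
    ("https://www.google.com/", "/products/widgets"),
    ("https://perplexity.ai/", "/whitepaper.pdf"),
]
print(compare(sample))
```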

We found two big differences: LLMs show a "preference" for core website pages and documents, and a "dislike" for listings and collection pages.

Citation is more important for an LLM than for a search engine. Search engines generally surface information alongside the source that created it. LLMs decouple the two, creating an extra need to prove the authenticity of whatever claim is being made.

From this data, it seems the majority of citations fall into the "core website pages" category: a website's home page, pricing page, or about page. These are crucial parts of a website, but not always big contributors to search visibility. Their importance seems greater for LLMs.

A slide from my brightonSEO talk showing how AI and non-AI traffic is distributed across different page types.

Conversely, listing pages (think huge breadcrumbed Rolodexes of products) that are created primarily for on-page navigation and search visibility received far fewer visits from LLMs. Even if these page types aren't cited often, it's possible that they could further an LLM's understanding of a brand because of the co-occurrence of different product entities. But given that these pages are usually sparse in context, they likely have little influence.

Lastly, website documents also seem more important for LLMs. Many websites treat PDFs and other kinds of documents as second-class citizens, but for LLMs, they're a content source like any other, and they routinely cite them in their outputs.

Practically, I can imagine companies treating PDFs and other forgotten documents with more importance, on the understanding that they can influence LLM output in the same way any other website page would.

The fact that LLMs can access website documents raises an interesting point. As Andrej Karpathy points out, there may be a growing benefit to writing documents that are structured first and foremost for LLMs, and left relatively inaccessible to people:

"It's 2025 and most content is still written for humans instead of LLMs. 99.9% of attention is about to be LLM attention, not human attention.

E.g. 99% of libraries still have docs that basically render to some pretty .html static pages assuming a human will click through them. In 2025 the docs should be a single your_project.md text file that is intended to go into the context window of an LLM.

Repeat for everything."

Andrej Karpathy

This is an inversion of the SEO adage that we should write for humans, not robots: there may be a benefit to focusing our energy on making information accessible to robots, and relying on LLMs to render that information into more accessible forms for users.
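As a minimal sketch of the approach Karpathy describes, you could flatten a docs folder into one Markdown file sized for an LLM's context window. The folder path and output filename below are placeholder assumptions, not a prescribed convention.

```python
# Sketch: concatenate every Markdown doc into a single your_project.md file
# intended for an LLM's context window. Paths are placeholders.
from pathlib import Path

def build_llm_doc(docs_dir: str, output: str = "your_project.md") -> None:
    parts = []
    for path in sorted(Path(docs_dir).rglob("*.md")):
        # Label each section with its source file so isolated passages keep their context.
        parts.append(f"## Source: {path.relative_to(docs_dir)}\n\n{path.read_text(encoding='utf-8')}")
    Path(output).write_text("\n\n".join(parts), encoding="utf-8")

build_llm_doc("docs")  # point this at your real docs folder
```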

Along these lines, there are particular information structures that can help LLMs correctly understand the information we provide.

For example, Snowflake refers to the idea of "global document context". (H/T to Victor Pan from HubSpot for sharing this article.)

LLM pipelines work by breaking text into "chunks"; by adding extra information about the document throughout the text (like company name and filing date for financial text), it's easier for the LLM to understand and correctly interpret each isolated chunk, "boosting QA accuracy from around 50%-60% to the 72%-75% range."
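Here is a minimal sketch of that idea: prepend document-level metadata to every chunk so each one still makes sense when retrieved in isolation. The splitting rule and the metadata fields are illustrative assumptions, not Snowflake's actual implementation.

```python
# Sketch of "global document context": stamp each chunk with document metadata
# so an isolated chunk still carries enough context to be interpreted correctly.
from textwrap import wrap

def chunk_with_context(text: str, metadata: dict, chunk_size: int = 400) -> list[str]:
    header = " | ".join(f"{k}: {v}" for k, v in metadata.items())
    # Naive fixed-width splitting; real pipelines usually split on headings or sentences.
    return [f"[{header}]\n{chunk}" for chunk in wrap(text, chunk_size)]

report = "Revenue grew 14% year over year, driven by the subscriptions segment..."
chunks = chunk_with_context(
    report,
    {"company": "Acme Corp", "document": "Q3 2024 earnings report", "filed": "2024-10-28"},
)
print(chunks[0])
```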

Understanding how LLMs process text gives brands small ways to improve the likelihood that LLMs will interpret their content correctly.

LLMs also train on novel information sources that have traditionally fallen outside the remit of SEO. As Adam Noonan shared with me on X: "Public GitHub content is guaranteed to be trained on but has no impact on SEO."

Coding is arguably the most successful use case for LLMs, and developers must make up a sizeable portion of total LLM users.

For some companies, especially those selling to developers, there may be a benefit to "optimizing" the content those developers are most likely to interact with (knowledge bases, public repos, and code samples) by including extra context about your brand or products.

Lastly, as Elie Berreby explains:

"Most AI crawlers don't render JavaScript. There's no renderer. Modern AI crawlers like those used by OpenAI and Anthropic don't even execute JavaScript. That means they won't see content that's rendered client-side by JavaScript."

Elie Berreby

This is more of a footnote than a major difference, for the simple reason that I don't think it will remain true for very long. This problem has been solved by many non-AI web crawlers, and will be solved by AI web crawlers in short order.

But for now, if you rely heavily on JavaScript rendering, a portion of your website's content may be invisible to LLMs.
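A rough way to check your own exposure is to fetch a page without executing JavaScript and see which copy survives, which approximates what a non-rendering crawler receives. This sketch uses requests and BeautifulSoup; real AI crawlers differ in headers and parsing, and the URL and "pricing" spot-check are placeholders.

```python
# Fetch raw HTML with no JavaScript execution and extract the visible text,
# a rough proxy for what a non-rendering AI crawler can actually read.
import requests
from bs4 import BeautifulSoup

def static_text(url: str) -> str:
    html = requests.get(url, timeout=10, headers={"User-Agent": "static-check/0.1"}).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()  # drop non-visible elements before extracting text
    return " ".join(soup.get_text(separator=" ").split())

text = static_text("https://example.com/")
print(len(text), "characters of server-rendered text")
print("pricing" in text.lower())  # spot-check that key copy is present without JS
```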

Closing thoughts

But here's the thing: managing indexing and crawling, structuring content in machine-legible ways, building off-page mentions… these all feel like the classic remit of SEO.

And these unique differences don't seem to have manifested in radical gaps between most brands' search visibility and LLM visibility: generally speaking, brands that do well in one also do well in the other.

Even if GEO does eventually evolve to require new tactics, SEOs (people who spend their careers reconciling the needs of machines and real people) are the people best placed to adopt them.

So for now, GEO, LLMO, AEO… it's all just SEO.




