On propaganda, perception, and reputation hacking

For the last two decades, SEO has been a battle for position. In the age of agentic AI, it becomes a battle for perception.

When an LLM – or whatever powers your future search interface – decides “who” is trustworthy, useful, or relevant, it isn’t weighing an objective truth. It’s synthesising a reality from fragments of information, patterns in human behaviour, and historical residue. Once the model holds a view, it tends to repeat and reinforce it.

That’s propaganda – and the challenge is ensuring the reality the machine constructs reflects you at your best.

Two ideas help navigate this. 

  • Perception Engineering is the long game: shaping what machines “know” over time by influencing the enduring sources and narratives they ingest. 
  • Reputation Hacking is the nimble, situational work of influencing or correcting narratives in the moment.

Both are forms of propaganda in the machine age – not the crude, deceptive kind, but the careful, factual shaping of how your story is told and retold.

And both matter – because the future of discovery is dynamic and adaptive, but the raw material it draws on changes slowly. That persistence – what sticks, what lingers, what gets repeated – is where most of the opportunity (and risk) lives.

This brings us to the core difference between human and machine narratives: how they remember. In this game, memory isn’t a passive archive – it’s an active filter, deciding what survives and how it’s retold. Get into the memory for the right reasons, and you can ride the benefits for years; get in for the wrong ones, and the shadow can be just as long.

The perpetual memory of machines

Humans forget. Machines don’t – at least, not in the same way.

When we forget, the edges blur. Details fade, timelines collapse, and the story becomes softer with distance. Machines, by contrast, don’t lose the data; they distil it. Over time, sprawling narratives are boiled down into their most distinctive fragments. That’s why brand histories are rarely undone by a single correction: they’re retold and re-framed until only the high-contrast bits remain.

Models are especially good at this kind of distillation – the scandal becomes the headline; the resolution is relegated to a footnote. In human propaganda, repetition does the work; in machine propaganda, compression and persistence do.

And because the compressed version often becomes the only version a machine recalls at speed, understanding how that memory is formed is crucial.

Two kinds of memory matter here:

  • Training memory: whatever was in the data during the last snapshot. If it was high-profile, repeated, or cited, it picks up “institutional gravity” and is hard to dislodge.
  • Retrieval memory: whatever your agent fetches at runtime – news, documents, databases – and the guardrails that steer how it’s used.

Time decay helps, sometimes. Many systems down-weight stale material so answers feel current. But it’s imperfect. High-visibility events keep their gravity, and low-visibility corrections don’t always get the same reach.

There’s also the “lingering association” problem: co-reference (“old name a.k.a. new name”) or categorical links (“company X, part of scandal Y”) keep the old framing alive in perpetuity. In human terms, it’s like being introduced at a party with a two-year-old anecdote you’d rather forget.

The point isn’t that machines never forget – it’s that they forget selectively, in ways that don’t automatically favour the most recent or most accurate version of your story.

Positive PR as a self-fulfilling loop

If memory can haunt, it can also help.

Language travels. In the best kind of propaganda, it’s the flattering, accurate turn of phrase that does the rounds. When a respected outlet coins one, it doesn’t stay put.

It turns up in analyst notes, conference decks, product reviews, and investor briefings. The repetition turns it into a linguistic anchor – the default way to describe you, even for people who’ve never read the original.

Behaviour travels too. If people expect you to be good, they act accordingly: they search for you by name, click you first, stick around longer, and talk about you in more positive terms. None of that proves you’re the best, but it creates data patterns that make you look like the best to systems that learn from aggregate behaviour.

The loop is subtle: positive framing → positive behaviour → positive framing. It’s not instant, but once established, it can be self-reinforcing for years.

In this context, Perception Engineering is about identifying the phrases, framings, and narratives you’d want to see repeated indefinitely – and ensuring they originate in credible, durable sources. Reputation Hacking, on the other hand, is about spotting those moments in the wild – a conference panel soundbite, a glowing product comparison – and nudging them into places where they’ll be picked up, cited, and echoed.

The trick isn’t to plant advertising copy in disguise; it’s to seed clear, accurate, and repeatable language that works for you when it’s stripped of context and paraphrased by a machine.

The weaponisation of perception

Any system that can be shaped can be distorted. And in an environment where narrative persistence is the real prize, some will try.

Defensive propaganda starts with recognising the quiet ways bias enters the record: selective data, tendentious summaries, strategic omissions. These aren’t always illegal. They’re rarely obvious. But once embedded – especially in formats with long shelf lives – they can tilt the machine’s memory for years.

Weaponisation doesn’t have to look like a smear campaign. It can be as subtle as redefining a term in a trade publication, repeatedly pairing a competitor’s name with an unflattering comparison, or supplying an “expert quote” that’s technically accurate but engineered to leave the wrong impression. Even the order of information can create a lasting skew.

The danger isn’t only in outright falsehoods. Once a distortion is repeated and cited, it becomes part of the machine’s “truth set” – and because models reconcile contradictions into one coherent narrative, the detail they keep is often the one with the sharpest edge, not the one that’s most correct.

The countermeasure is simple, if not easy: make the accurate version so abundant, consistent, and easy to cite that it outweighs the distortion. If there’s going to be a gravitational centre, you want it to be yours.

We’ve seen shades of this in human media ecosystems for decades:

  • A decades-old product recall still mentioned in “history” sections long after the issue was resolved.
  • Industry rankings where the methodology favours one business model over another, subtly reshaping market perception.
  • Persistent category definitions that exclude certain players altogether, not because they’re irrelevant, but because the earliest, most visible framing said so.

Pretending this doesn’t happen is naïve. Copying it is reckless. The more sensible response is to raise the signal-to-noise ratio in your favour – in other words, to counter bad propaganda with better propaganda: a clear, consistent truth that’s hard to compress into anything less flattering.

Neutrality is a story we tell ourselves

Agents don’t simply “retrieve facts”. They synthesise from priors, recency, safety layers, and whatever they can fetch. Even when they hedge (“some say”), they still decide which “some” count – and that decision shapes the story.

In the blue-link era, we optimised for ranking. In the agent era, we optimise for narrative selection: the frames, sources, and categories that get picked when the machine tells the story of your topic. This is exactly where perception engineering and reputation hacking collide: you can’t guarantee the story will be neutral, but you can influence which stories and definitions the machine has to choose from.

Once a framing is dominant, it creates a gravitational field. Competing narratives struggle to break in, because the model is optimising for coherence as much as correctness. That’s why the first widely cited definition of a category, or the earliest comprehensive guide to a topic, often becomes the anchor – whether or not it’s perfect. Every subsequent mention is then interpreted, consciously or not, through that lens.

The real collapse of neutrality isn’t bias in the political sense. It’s that “the truth” is increasingly whatever the machine can construct most coherently from the material at hand. And coherence rewards whoever got there first, spoke the clearest, or was repeated most often.

Which means if you don’t help define your category – its language, its exemplars, its boundaries – the machine will do it for you, using whatever scraps it can find. Perception engineering ensures those scraps are yours; reputation hacking helps you insert them quickly when the window is open.

Recalibrating the marketing stack

To succeed, you must treat the machine’s worldview as a product you can influence – and as an ongoing propaganda campaign you’re running in plain sight – with editorial standards, governance, and measurement.

That means you need:

  • Governance: someone owns the brand’s “public record”. Not just the site, but the wider corpus that describes you.
  • Observation: regular belief-testing. Ask top agents the awkward questions you fear customers are asking. Record the answers. Track drift.
  • Editorial: create “sources of record” – durable, citable material that others use to explain you.
  • Change management: when reality changes (new product, leadership, policy), plan the narrative update as a programme, not a press release.
  • Crisis hygiene: have a playbook for fast corrections, long-lived clarifications, and calm follow-ups that age well.

This isn’t new work so much as joined-up work. PR, content, SEO, legal, product. Same orchestra, new conductor.

From ideas to action

The principles we’ve covered – perception engineering and reputation hacking – aren’t abstract labels. They’re two complementary operating modes that inform everything from your editorial process to your crisis comms. Perception engineering sets the long-term gravitational field; reputation hacking is the course correction when reality or risk intrudes.

In practice, they draw from the same toolkit – research, content, partnerships, corrections – but the sequencing, pace, and priority are different. Perception engineering is slow-burning and accumulative; reputation hacking is urgent and surgical.

What follows isn’t “SEO tips” or “PR tricks” – it’s the operationalisation of those two modes. Think of it as building a persistent advantage in the machine’s memory while keeping the agility to steer it when you need to.

Practical applications

The battle for perception isn’t won in the heat of a campaign. It’s won in the quiet, unglamorous maintenance of the record the machine depends on. If its “memory” is the raw material, then perception engineering and reputation hacking are the craft – the fieldwork that keeps that raw material current, coherent, and aligned with your preferred story.

This is the operational layer: the things you can do – quietly, methodically – to ensure that when the machine tells your story, it’s working from the version you’d want repeated.

Perception Engineering (proactive)

Proactive work is the compound-interest version: it’s slower to show results, but once set, it’s hard to dislodge. This is where you lay down the durable truths, the assets and anchors that will be repeated for years without you having to touch them.

  • Audit the deep web of your brand: Not just your own site, but press releases, partner microsites, supplier portals, open-license repositories, and archived PDFs. Look for outdated product names, superseded logos, retired imagery, and even mismatched colour palettes. Machines will happily pull any of it into their summaries.
  • Maintain staff and leadership profiles: Your own team pages, but also speaker bios on conference sites, partner directories, media appearances, and LinkedIn. An ex-employee still billed as “Head of Innovation” on a high-ranking event page can haunt search summaries for years.
  • Keep organisational clarity: Align public org charts, leadership listings, and governance descriptions across your site, LinkedIn, investor relations, and third-party listings. A machine that sees three different hierarchies will assume the one with the most citations is the “truth” – and it might not be the one you prefer.
  • Refresh high-authority, long-life assets: Identify the logos, diagrams, and “about” text most often re-used by journalists, analysts, and partners. Replace outdated versions in all the places people (and scrapers) are likely to fetch them.
  • Define your narrative anchors: Pick the ideas, phrases, and category definitions you’d like attached to your name for the next five years. Name them well, explain them clearly, and seed them in durable sources – encyclopaedic entries, standards bodies, academic syllabi – not just transient campaign pages.
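The audit steps above lend themselves to simple automation: keep a list of retired names and superseded terms, then sweep your public corpus for them. The term list, URLs, and page contents below are hypothetical examples; a real audit would fetch live pages and archived PDFs.

```python
# Illustrative brand-audit sketch: find stale terminology in public pages.
# Maps a retired term to its replacement ("" if there is no successor).
RETIRED_TERMS = {
    "AcmeCloud Classic": "AcmeCloud",  # superseded product name
    "Head of Innovation": "",          # retired job title
}

def audit_pages(pages, retired_terms=RETIRED_TERMS):
    """Return {url: [stale terms found]} for pages that need updating."""
    findings = {}
    for url, text in pages.items():
        stale = [term for term in retired_terms if term.lower() in text.lower()]
        if stale:
            findings[url] = stale
    return findings

# Example corpus (invented URLs and copy, for illustration only).
pages = {
    "https://example.com/about": "We build AcmeCloud for modern teams.",
    "https://example.com/press/2018": "AcmeCloud Classic, led by our Head of Innovation...",
}
print(audit_pages(pages))
# → {'https://example.com/press/2018': ['AcmeCloud Classic', 'Head of Innovation']}
```

The output is a worklist, not a verdict: an old press release may be worth preserving with a clarifying note rather than rewriting, per the reactive steps below.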

Perception Engineering (reactive)

Reactive work is about patching holes in the hull before the leak becomes the story. It’s faster, more visible, and sometimes more expensive, because you’re competing with whatever’s already in circulation. The goal isn’t just to fix the record – it’s to do so in a way that ages well and doesn’t keep re-surfacing the old problem.

  • Update the record before the campaign: When something changes – product launch, rebrand, leadership shift – make sure the long-lived references get updated first (Wikipedia, investor materials, industry directories). Campaign assets come second.
  • Clean up legacy debris: Retire or redirect old content that keeps the wrong story alive. Where removal isn’t possible, add clarifying updates so the old version isn’t the only one available to be quoted.

Reputation Hacking (proactive)

This is the “social engineering” of credibility – done ethically. You’re placing the right facts and framings in the high-gravity sources that machines and people alike draw from. Done consistently, it builds a kind of reputational armour.

  • Track the gravitational sources: Identify the handful of third-party sites, writers, or communities that punch above their weight in your category. Maintain an accurate, consistent presence there.
  • Synchronise your language: Ensure spokespeople, PR, product, and content teams are describing the brand in the same terms, so repetition works in your favour – and machines see one coherent narrative, not a jumble of similar-but-different descriptors.

Reputation Hacking (reactive)

This is triage. You can’t always prevent distortions, but you can choose where and how to counter them so the fix lives longer than the fault. It’s also where the temptation to over-correct can backfire; you want a clean resolution, not an endless duel that keeps the bad version alive.

  • Respond where it will linger: When a skewed narrative surfaces, publish the correction or context in the source most likely to be cited next year – not just the one trending today.
  • Offer clarifications that age well: Use timelines, primary data, and named accountability rather than ephemeral rebuttals. Once that’s in the record, resist the temptation to keep stoking the conversation – you want the durable correction, not the endless back-and-forth.

Where to start

The fastest way to see how the machine sees you is to ask it. Pick three or four leading AI search tools and prompt them the way a customer, investor, or journalist might. Don’t just check the facts – listen for tone, framing, and what gets left out.

Then work backwards: which pieces of the public record are feeding those answers? Which of them could you update, clarify, or strengthen today? You don’t have to rewrite your whole history at once. Just start with the handful of durable, high-visibility assets that most shape the summaries – because those will be the roots every new narrative grows from.
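The “ask it” exercise above is easiest to run systematically if you write the prompt matrix down first – every persona crossed with every awkward question. This sketch just generates that matrix; the personas, questions, and phrasing are examples, and the answers would be gathered by hand or via whichever tools you’re testing.

```python
# Sketch of the "where to start" exercise: a persona x question prompt matrix.
from itertools import product

PERSONAS = ["a prospective customer", "an investor", "a journalist"]
QUESTIONS = [
    "What is {brand} best known for?",
    "What are the main criticisms of {brand}?",
    "Who are {brand}'s closest competitors, and how does it compare?",
]

def prompt_matrix(brand):
    """One prompt per (persona, question) pair, phrased from that persona's view."""
    return [
        f"Answer as if briefing {persona}: " + question.format(brand=brand)
        for persona, question in product(PERSONAS, QUESTIONS)
    ]

for prompt in prompt_matrix("Acme"):
    print(prompt)
```

Run the same matrix against each tool, on a schedule, and the tone and omissions become visible as patterns rather than anecdotes.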

Closing the loop

In the old search era, the prize was the click. In the agent era, the prize is the story – and once a version of that story lodges in the machine’s memory, it calcifies. You can chip at it, polish it, add new chapters… but moving the core narrative takes years.

Propaganda, perception engineering, reputation hacking – call it what you like. The point is the same: you’re no longer just marketing to people; you’re marketing to the machines that will introduce you to them.

Ignore that, and you’re effectively letting someone else write your opening paragraph – the one the machine will read aloud forever. Play it well, and your version becomes the one every other retelling has to work to dislodge.
