N°01 — May 2026

The May newsletter

Agentic Commerce, Multi-CDN, Agent-Spoofing, Trackers & NIS2.

Agentic Commerce

Read

3min

Retail News

When AI redefines the relationship between brands and their customers.

Optimi was present at two unmissable global retail events earlier this year: the One to One Retail E-Commerce in Monaco and NRF 2026 in New York. Two events, two continents, one single topic that dominated every conversation: agentic commerce.

At NRF in New York, the world’s largest retail gathering, bringing together over 40,000 professionals every January, the role of AI in purchasing processes ran through every keynote and roundtable. 

It was no longer a forward-looking topic reserved for “innovation” sessions: it was at the heart of operational discussions, alongside supply chain and user experience customization.

The same tone prevailed at the One to One in Monaco, a European gathering particularly valued by senior decision-makers.

It is in this context that the leaders of major online retailers acknowledged the same shift: the AI agent is establishing itself as the new intermediary between the brand and its customer.

The browsing logic is giving way to a query logic. Consumers no longer choose where to go. They delegate part of the journey to an agent that synthesises, recommends and sometimes purchases.

“Tomorrow, the question will no longer be just about ranking in Google, but about being recommended by artificial intelligences.”

Why does this matter?

Twenty years of SEO built on Google rankings are no longer sufficient to guarantee brand visibility.

Generative Engine Optimisation (GEO) is the defining challenge of the moment: becoming a source that AI models consider reliable and usable.

The mechanics differ from SEO. A search engine ranks pages; an AI model synthesises sources it considers coherent, well-structured, and recognised as authoritative:

  • Clean product data
  • Semantic schemas
  • Dense editorial content
  • Site infrastructure
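“Clean product data” and “semantic schemas” usually mean structured data that a model can parse unambiguously. Below is a minimal sketch of a schema.org Product record serialised as JSON-LD; the product fields and values are illustrative, not any client’s actual data:

```python
import json

# Hypothetical product record; all field values are illustrative only.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trail Running Shoe X1",
    "sku": "X1-42-BLU",
    "description": "Lightweight trail running shoe with reinforced sole.",
    "brand": {"@type": "Brand", "name": "ExampleBrand"},
    "offers": {
        "@type": "Offer",
        "price": "129.90",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Serialise as JSON-LD, ready to embed in a <script type="application/ld+json"> tag.
json_ld = json.dumps(product, indent=2)
print(json_ld)
```

The same record serves both audiences at once: search engines read it for rich results, and AI models ingest it as a structured, authoritative description of the product.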

What Monaco and New York confirmed is that the window for action is narrow. 

Brands that do not invest today in their credibility with AI models risk being absent from them tomorrow. 

Without a solid GEO strategy and robust infrastructure, your brand loses direct traffic and becomes dependent on a new layer of intermediation it does not control.

PERFORMANCE

Read

3min

Multi-CDN

Why Multi-CDN is becoming the standard

In recent years, several major incidents at leading CDN providers caused service outages lasting several hours. Meanwhile, DDoS attack volumes are reaching record levels and the trend is upward.

A CDN (Content Delivery Network) distributes your content (web pages, videos, APIs) as close as possible to your end users, in order to reduce latency and guarantee availability.

The Multi-CDN approach goes further: it involves using several CDN providers in parallel and intelligently distributing traffic between them in real time. Rather than depending on a single provider, you diversify your network to maximise the performance, resilience, and delivery flexibility of your digital content.

⚡ Geographic performance. Routing each user to the fastest CDN at a given moment, from their network zone, typically delivers a 15 to 30% improvement in your LCP (Largest Contentful Paint) score — with a direct impact on conversion rates and Core Web Vitals.

🛡️ Compliance & Resilience. Recent European regulations (DORA for finance, NIS2 for critical infrastructure) and data protection requirements (Schrems II) encourage organisations not to rely on a single provider. Multi-CDN is no longer a technical option — it is a compliance measure: you demonstrate that your service remains available and compliant even if your primary provider suffers an outage or a legal restriction.

The real challenge is not Multi-CDN itself, but how to orchestrate it with precision. 

The ability to route each user intelligently toward the best option at any given moment is where the real added value lies.
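As an illustration of what that orchestration involves, here is a deliberately simplified routing sketch: it keeps a rolling window of latency samples per network zone and provider (the kind of feed real-user monitoring would supply) and ranks providers fastest-first, which also yields a failover order. The provider names and the scoring rule are assumptions for the example, not Optimi’s actual algorithm:

```python
import statistics
from collections import defaultdict, deque

# Hypothetical provider names; a real deployment feeds this from RUM beacons.
PROVIDERS = ["cdn-a", "cdn-b", "cdn-c"]

class MultiCdnRouter:
    """Route each request to the provider with the lowest recent latency
    for the user's network zone; the ranked list doubles as a failover order."""

    def __init__(self, window=50):
        # Rolling latency samples per (zone, provider) pair.
        self.samples = defaultdict(lambda: deque(maxlen=window))

    def record(self, zone, provider, latency_ms):
        self.samples[(zone, provider)].append(latency_ms)

    def pick(self, zone):
        # Rank providers by median observed latency in this zone;
        # providers with no data sort last so they still receive probes.
        def score(p):
            s = self.samples[(zone, p)]
            return statistics.median(s) if s else float("inf")
        return sorted(PROVIDERS, key=score)

router = MultiCdnRouter()
router.record("eu-west", "cdn-a", 80)
router.record("eu-west", "cdn-b", 35)
router.record("eu-west", "cdn-c", 120)
print(router.pick("eu-west"))  # fastest first: ['cdn-b', 'cdn-a', 'cdn-c']
```

Production orchestration adds health checks, cost weighting, and contractual constraints on top of raw latency, but the core loop is the same: measure continuously, rank per zone, route accordingly.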

This is Optimi’s core expertise: we select the best providers according to each client’s specific profile. 

By ensuring this technical governance, we protect your revenue and brand image, while freeing you from the complexity of managing multiple vendors.

 

Why does this matter?

For an e-commerce site generating €50M per year, a two-hour outage represents a direct revenue loss of up to €15,000. Beyond that immediate cost, your brand image and contractual commitments (SLAs) are put at risk.
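The figure can be checked with a back-of-envelope calculation; the peak multiplier below is an illustrative assumption, since an outage during peak traffic concentrates far more revenue into those two hours than the annual average suggests:

```python
annual_revenue = 50_000_000  # € per year
hours_per_year = 365 * 24

avg_hourly_revenue = annual_revenue / hours_per_year   # ~ €5,700 per hour
two_hour_loss_avg = 2 * avg_hourly_revenue             # ~ €11,400 on average

# Illustrative assumption: peak-hour traffic carries ~30% more revenue,
# which is how a two-hour outage approaches the €15,000 upper bound.
peak_multiplier = 1.3
two_hour_loss_peak = two_hour_loss_avg * peak_multiplier

print(round(two_hour_loss_avg), round(two_hour_loss_peak))
```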

Our vendor-agnostic approach allows us to combine the best infrastructures on the market based on each client’s profile, with fine-grained configuration and continuous real-time performance monitoring.

SECURITY

Read

5min

Agent-spoofing

Agent Spoofing: 80% of retail sites are unprotected

What is agent spoofing?

Agent spoofing is a technique in which a malicious bot impersonates an AI agent, a search engine crawler, a price comparison tool, or another trusted client in order to bypass the security of an online retail site.

By gaining unrestricted access, the bot can, for example, scrape information or manipulate SEO data. With the proliferation of AI agents, this identity spoofing has become the most difficult attack vector to detect.

Key figures

Joint study by DataDome / Botify / AWS / Retail Economics (6,000 consumers in the UK, US, and France), published in late February 2026:

  • 80% of retail sites are not protected against agent spoofing
  • 80% of AI agents do not correctly identify themselves to the sites they visit
  • AI bot activity on retail sites increased fivefold in 2025
  • 38% of consumers use an AI assistant during their online shopping journey

Direct consequences for retailers: distorted analytics, inflated AI referencing signals, biased commercial decisions, and greater exposure to fraud.

The technical reality

The two traditional methods for distinguishing a legitimate agent from an impersonator, the User-Agent header (a text string declaring the client’s name and version) and the IP range (the network address of origin), are no longer sufficient to establish who is actually connecting.

  • The User-Agent is simply a self-declaration: any malicious bot can present itself as a known AI agent with a single line of code.
  • AI agents are deployed on shared cloud infrastructures (AWS, Azure, Google Cloud), where thousands of legitimate and malicious services share the same address blocks, which change constantly through dynamic allocation.
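How trivial the first point is can be shown in a few lines of Python: the User-Agent is simply a header the client chooses to send. The bot name and URL below are made up for the illustration, and the request is constructed but never sent:

```python
import urllib.request

# A self-declared identity: nothing stops this request from claiming to be
# a well-known AI crawler (the name and URL below are purely illustrative).
req = urllib.request.Request(
    "https://shop.example.com/products",
    headers={"User-Agent": "ExampleAIBot/1.0 (+https://example.com/bot)"},
)

# urllib would send that header verbatim; on this signal alone, the origin
# has no way to tell the impostor apart from the real crawler.
print(req.get_header("User-agent"))
```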

The emerging industry response is based on a principle borrowed from digital signatures: HTTP Message Signatures (RFC 9421).

The agent cryptographically signs each outgoing request with its private key and publishes the corresponding public key at a canonical URL. The origin validates the signature server-side, proving the identity of the sender.
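A simplified sketch of that signing-and-verification flow, using only Python’s standard library. RFC 9421 defines the exact signature-base serialisation and supports asymmetric algorithms such as Ed25519, which is what agent identification relies on; HMAC stands in here purely to keep the example dependency-free:

```python
import base64
import hashlib
import hmac

# Build a canonical "signature base" from covered request components,
# in the spirit of RFC 9421 (the real serialisation rules are stricter).
def signature_base(method, authority, path, params):
    lines = [
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": {params}',
    ]
    return "\n".join(lines)

def sign(key, base):
    mac = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

def verify(key, base, signature):
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(key, base), signature)

shared_key = b"demo-key"  # illustrative; real agents publish a public key instead
params = '("@method" "@authority" "@path");alg="hmac-sha256"'
base = signature_base("GET", "shop.example.com", "/products", params)
sig = sign(shared_key, base)

assert verify(shared_key, base, sig)           # untampered request passes
tampered = base.replace("/products", "/admin")
assert not verify(shared_key, tampered, sig)   # any modification breaks it
```

The key property is the last assertion: a spoofer can copy a User-Agent string, but it cannot forge a valid signature without the agent’s private key.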

For 80% of retailers, now is the time to audit their Bot Management and WAF solutions, identify signed agents, and apply precise security rules based on their nature. 

For merchants engaged in a UCP (Universal Commerce Protocol) approach, these signatures are rapidly becoming the standard of trust.

VISIBILITY

Read

3min

Tracking server

Why the best-performing sites no longer trust the browser

Ad blockers, browser restrictions, third-party scripts dragging down performance… The classic marketing tracking model is breaking down.

Marketing teams are losing up to a third of their conversion data, tech teams are watching their Core Web Vitals deteriorate, and increasingly precise GDPR questions are being raised about what actually leaves the user’s device.

Server-side tagging addresses all three problems at once.

What is happening today on the browser side

For fifteen years, the pattern has been the same: a GTM snippet on the site, dozens of tags firing from the user’s browser (Google Analytics, Google Ads, Meta, LinkedIn Insight, etc.), and as many requests going directly to third-party platforms.

Three developments have undermined this model:

  • Ad blockers have become widespread. Depending on the audience, between 20% and 45% of visitors now use one. Those visits simply do not exist in your data.
  • Browsers block by default. Safari blocks third-party cookies outright and caps the lifetime of script-set cookies; Firefox ships similar tracking protection; Chrome is progressively tightening restrictions. Attribution windows are shrinking.
  • Third-party scripts are expensive in performance terms. A standard marketing tracking stack loads between 15 and 40 external scripts, each adding latency and occupying the browser’s main thread: exactly what Google’s Core Web Vitals measure.

How does server-side tagging work?

The principle is straightforward. 

Instead of firing each tag from the browser, a lightweight client-side script collects events and sends them to a server endpoint you control, typically via a subdomain such as metrics.yoursite.com. 

Your server then handles distribution to Google Analytics 4, the Google Ads Conversion API, Meta, your CRM, your data warehouse, and so on.
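A minimal sketch of that server-side step, with hypothetical event fields and consent flags: the server strips the IP, hashes the email (Google’s and Meta’s server APIs expect a SHA-256 of the normalised address), and forwards only to destinations the user consented to. Field names and destination labels are illustrative, not a specific vendor API:

```python
import hashlib

def prepare_event(raw_event, consent):
    """Apply PII and consent policy server-side before any fan-out."""
    event = dict(raw_event)
    # The server decides what leaves: drop the IP entirely and replace
    # the email with a SHA-256 hash of its normalised (trimmed, lowercased)
    # form, as the major server-side conversion APIs expect.
    event.pop("ip", None)
    if "email" in event:
        normalised = event.pop("email").strip().lower()
        event["hashed_email"] = hashlib.sha256(normalised.encode()).hexdigest()
    # Honour the user's consent choices: only consented destinations
    # ever receive the event.
    destinations = [d for d, allowed in consent.items() if allowed]
    return event, destinations

raw = {"name": "purchase", "value": 129.90, "ip": "203.0.113.7",
       "email": "Jane@Example.com"}
consent = {"analytics": True, "ads": False}

event, destinations = prepare_event(raw, consent)
print(destinations)  # the event is forwarded only to 'analytics'
```

Because this logic lives in one place on your infrastructure, changing a privacy rule means changing one function, not auditing dozens of browser tags.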

 

What this changes in practice:

  • Blockers have no target. Requests originate from your own domain: no googletagmanager.com or facebook.com to intercept. For the browser and extensions, these calls are indistinguishable from normal site traffic. 

  • You control what leaves. Data passes through your infrastructure before reaching third parties: you strip IP addresses, hash emails, and apply your GDPR consent rules from a single location. 

  • More reliable conversion signals. Google and Meta’s server APIs accept data that the browser pixel cannot transmit: exact basket value, customer identifiers, CRM data. More complete signals improve smart bidding quality, and reduce the share of conversions your campaigns simply never see. 

  • Faster pages. Every third-party script loaded in the browser occupies the main thread and delays rendering. By moving distribution logic server-side, you significantly reduce the number of external requests, with a direct impact on LCP and TBT, two Core Web Vitals signals that Google factors into its rankings.

A migration that requires careful planning

Server-side tagging is powerful, but it is not a switch you flip overnight. Infrastructure must be adapted: a container running somewhere, with low latency to your visitors. 

Latency is the key watchpoint: an endpoint hosted in the US serving a European audience adds hundreds of milliseconds to every event. Edge deployment or regional hosting is no longer optional.

Consent must be managed seriously. Moving tracking server-side does not change the GDPR. If anything, it reinforces requirements, since you explicitly become the data controller for the first segment of the flow. Consent management must be wired into the server container, not just the browser.

Finally, a phased migration must be planned. Most teams run both systems in parallel for several weeks to compare data, validate conversions, and switch tags methodically.

The underlying logic

Server-side tagging is part of a broader movement: high-performance web stacks systematically move work out of the browser and into controlled infrastructure. CDNs did it for content delivery. WAFs did it for security. Server-side tagging does it for marketing data.

The teams that extract the most value treat tagging, caching, security, and edge compute as a single subject, not separate projects. 

When your analytics endpoint, CDN, and security layer run on the same infrastructure, latency drops, operations simplify, and compliance becomes easier to demonstrate.

Regulation

Read

3min

NIS2 Directive

The European NIS2 Directive: what you need to know

What is NIS2?

The NIS2 directive (Network and Information Security 2) is the European Union’s new legislative framework for cybersecurity. It entered into force on 16 January 2023, updating and strengthening the previous NIS directive, with the ambition of harmonising the level of cyber protection across all member states, much as the GDPR did for personal data protection.

Why was this update necessary?

In the face of an explosion in cyberattacks, ransomware, data breaches, advanced phishing, zero-day exploits, European institutions judged it essential to raise protection requirements across member states. 

ENISA (the EU Agency for Cybersecurity) has documented an intensification of threats targeting European organisations. 

Member states had until 17 October 2024 to transpose the directive into national law.

Who is affected?

NIS2 applies to all medium and large public and private organisations operating in the EU in critical sectors. The directive distinguishes two categories:

  • Essential Entities (EE): energy, transport, banking, healthcare, water, subject to the strictest requirements.
  • Important Entities (IE): postal services, waste management, chemical industry, food and beverage, digital platforms (search engines, social networks, etc.).

 

What are the concrete obligations?

Article 21 of NIS2 imposes technical and organisational measures proportionate to the risks, including: risk analysis and security policies, incident management, business continuity and crisis management, supply chain security, encryption and cryptography, and multi-factor authentication.

What are the penalties for non-compliance?

Penalties are significant, comparable to those under the GDPR:

  • Essential Entities: up to €10 million or 2% of global turnover.
  • Important Entities: up to €7 million or 1.4% of global turnover.

 

The most impacted sectors

Three examples illustrate the scale of change: the healthcare sector, subject to maximum risk management and access control requirements; the retail sector, particularly targeted by ransomware (45% of distributors were victims of an attack in 2025 according to Sophos, with a median ransom demand of $2M); and third-party providers and suppliers, with 75% of global organisations having suffered attacks via their software supply chain in 2025, according to Gartner.

In summary

NIS2 marks a major turning point for cybersecurity in European businesses.

Beyond a regulatory obligation, it calls on organisations to adopt a proactive, structured approach to security, notably through the adoption of Zero Trust architectures.

Achieving compliance also means strengthening resilience against an ever-evolving threat landscape.