AI is your newest audience: The B2A(2C) design challenge
AI might be your primary audience, consuming and processing everything, but humans remain the ultimate consumers—they vote, buy products, and make decisions that shape society. Illustration: AI-generated image prompted by Rishad Patel
The machine in the audience
Last month at Tech Week in New York City, I heard variations of the same message from VCs: the market for B2C (Business-to-Consumer) products is tough. But while investors flee consumer-facing media, they're pouring billions into B2A (Business-to-Agent) in every other sector. Procurement algorithms negotiate supplier contracts. AI assistants book travel and schedule meetings. Trading bots dominate financial markets.
We may be entering a new phase in our information ecosystem where B2A is emerging as a distinct category that could cannibalise traditional B2B models. The old B2B is forking into two paths: B2C (selling to the humans within businesses) and B2A (selling directly to their AI systems). Given tech adoption curves and scaling laws, B2A represents the bigger growth sector for information, and one that most media companies haven't even begun to recognise.
The B2A2C pipeline is about expansion, not replacement
Here's the crucial insight: AI isn't replacing human audiences; it is both an entirely new audience and an intermediary. If this is true, the real pipeline might be B2A2C: Business to Agent to Consumer. In this model, AI systems would consume vast amounts of information, process it, and then translate it for human consumption.
So this isn't a story of replacement as much as it is about potentially massive expansion. Yes, human attention remains finite, but that's precisely why AI changes the game; it's not competing for human attention, it's creating an entirely new market. Where humans might read 10 sources, their AI agents could process 10,000, creating demand that never existed before. Every human might have dozens of AI agents consuming information on their behalf.
Human-produced content is unlikely to disappear entirely, but direct human-to-human media could shrink dramatically thanks to a powerful economic feedback loop: as more content optimises for AI consumption (cheaper to produce, larger audience), human-optimised content becomes relatively more expensive to create and distribute, pushing it into premium market segments as a luxury good.
Of course, we're still early in this transition, and the exact shape of this new ecosystem remains to be seen.
The recent SEO bloodbath, where major publishers saw 50%+ traffic drops as Google shifted to AI-powered search, is just the first tremor of this tectonic shift.
A tale of two intelligences
To understand the B2A2C opportunity, we need to grasp two things: how AI consumes information (B2A), and how it translates that for humans (A2C).
Imagine explaining a news article about inflation to two different audiences.
For humans, you might write "The Federal Reserve's unexpected decision sent shockwaves through markets today, catching traders off-guard and sparking debates about inflation."
For AI, optimal formatting looks like: {"entity": "federal_reserve", "action": "rate_change", "magnitude": 0.25, "direction": "increase", "timestamp": "2025-07-01T14:00:00Z", "market_response": {"sp500": -0.03, "bond_yield_10y": 0.15}, "trader_sentiment": "surprised", "consensus_expectation_delta": 0.25, "confidence": 0.95}
Same baseline information, but note a critical asymmetry: AI can read human-optimised content, while humans can't directly consume AI-optimised information without sophisticated translation layers. We need narrative context and emotional anchors to process information. Producing for AI consumption therefore creates a one-way street: machines can access everything, while humans need increasingly complex interfaces to access machine-oriented information.
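The contrast between the two formats can be sketched in a few lines of Python. Everything here is illustrative: the `event` schema and both renderers are hypothetical, not any real feed format.

```python
import json

# Hypothetical structured event: the "B2A" form of the Fed story above.
event = {
    "entity": "federal_reserve",
    "action": "rate_change",
    "magnitude": 0.25,
    "direction": "increase",
    "consensus_expectation_delta": 0.25,
}

def to_machine(event: dict) -> str:
    """Machine-optimised: stable keys, explicit values, no narrative."""
    return json.dumps(event, sort_keys=True)

def to_human(event: dict) -> str:
    """Human-optimised: the same facts wrapped in narrative context."""
    verb = "raised" if event["direction"] == "increase" else "cut"
    surprise = (
        "catching markets off-guard"
        if event["consensus_expectation_delta"] > 0
        else "in line with expectations"
    )
    return (
        f"The Federal Reserve {verb} rates by "
        f"{event['magnitude']:.2f} points, {surprise}."
    )

print(to_machine(event))
print(to_human(event))
```

Note the direction of the arrows: `to_human` is easy to write from the structured record, but recovering the record from the prose is exactly the hard parsing problem the asymmetry describes.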
This shift faces deep cultural resistance. My journalist friends hate this idea. Really hate it. "I didn't spend years learning to write stories just to do data entry," one told me. Another called producing for AI "betraying everything that makes journalism matter." They didn't enter the profession to feed machines. They came to tell stories, shape opinion, comfort the afflicted, and afflict the comfortable. Asking them to structure data for AI consumption feels like asking a chef to pour nutrition paste into feeding tubes.
This cultural resistance is real, understandable, and potentially catastrophic. Because while journalists perfect their craft for human readers, technology companies are building parallel infrastructure for AI consumption. The risk isn't replacement; it's irrelevance.
But the recent SEO apocalypse proves adaptation isn't optional. The opportunity lies in reframing this not as "feeding machines" but as "radical accessibility": making information available to new forms of intelligence.
The translation layer is where power lives
The A2C "translation layer", where AI translates machine-optimised information back to humans, is where the real power shift happens. We're moving from editorial power (choosing which stories to tell) to architectural power (designing how all information flows from machines to human minds).
This power is amplified by a crucial bottleneck: while AI agents can consume information at massive scale, they all converge at the same pinch point—human attention. Your dozens of AI agents might process 10,000 sources, but they still compete for your finite time and focus. Every choice in this translation shapes human understanding: thousands of indicators get translated into "the economy is recovering" or "recession looms." Same data, different narrative choices, vastly different human responses.
This layer is so impactful because it determines not just what humans know, but how they feel about what they know. Whoever controls this translation process doesn't just mediate information; they shape reality itself. AI may be your primary audience, but humans remain the ultimate consumers: they vote, buy products, and make the decisions that shape society.
The familiar fear about AI manipulation usually imagines malicious actors deliberately spreading misinformation. We should be clear-eyed: authoritarian regimes can and will use these translation layers as tools of social control, carefully curating what their citizens can know and think. But in more open societies, the threat is different and perhaps more insidious—it's unintentional. The gradual expansion of B2A2C creates such distance between machine-optimised information and human comprehension that translation layers gain unprecedented power over what humans perceive as truth.
This isn't just a technical challenge: it's the design opportunity of a generation. The imperative is clear: Build translation layers that enhance human agency rather than replace it. Make the mediation visible so people understand how machine insights become human stories. Create governance structures that let individuals adjust their own meaning-making. Design for transparency, showing which patterns led to which conclusions. Most importantly, ensure multiple translation options exist so no single system gains monopolistic control over the machine-to-human narrative pipeline.
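A minimal sketch of what "visible mediation" could look like in code, assuming a toy indicator set and two hypothetical translators. None of these names are a real API; the point is only that each human-facing summary carries the evidence behind it, and that multiple translators coexist.

```python
from dataclasses import dataclass

@dataclass
class Translation:
    summary: str        # what the human sees
    evidence: list[str] # which patterns led to this conclusion
    translator: str     # which translation layer produced it

def cautious_translator(indicators: dict) -> Translation:
    # Weighs growth against inflation before declaring a recovery.
    signal = indicators["gdp_growth"] - indicators["inflation"]
    summary = "recovery underway" if signal > 0 else "recession risk rising"
    return Translation(
        summary=summary,
        evidence=[f"gdp_growth={indicators['gdp_growth']}",
                  f"inflation={indicators['inflation']}"],
        translator="cautious_v1",
    )

def bullish_translator(indicators: dict) -> Translation:
    # Looks only at headline growth.
    summary = ("the economy is recovering"
               if indicators["gdp_growth"] > 0 else "growth has stalled")
    return Translation(
        summary=summary,
        evidence=[f"gdp_growth={indicators['gdp_growth']}"],
        translator="bullish_v1",
    )

# Same data, different translators, different narratives -- and the user
# can inspect the evidence for each, so no single layer owns the meaning.
indicators = {"gdp_growth": 1.2, "inflation": 3.1}
for translate in (cautious_translator, bullish_translator):
    t = translate(indicators)
    print(t.translator, "->", t.summary, "| evidence:", t.evidence)
```

The design choice worth noting: the `evidence` field makes the mediation auditable rather than invisible, which is the whole argument of this section in miniature.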
Whether markets reward this approach remains to be seen. But the companies most likely to endure will be those who build trusted, empowering bridges between machine intelligence and human understanding, because trust, once broken in translation, is nearly impossible to rebuild.
New design spaces
Traditional media optimises for human engagement through brand recognition and emotional resonance. But AI values entirely different signals: provenance chains over brand prestige, update velocity over polish, structural clarity over beautiful prose. These are all design opportunities that cluster into three strategic areas:
1. Trust & Verification: As information becomes liquid and remixable, provenance and integrity become critical. Companies that build robust verification systems and transparency into their translation layers will own user trust. The Content Authenticity Initiative (CAI) shows this in action: Adobe, The New York Times, and others are building cryptographic provenance chains for content. Imagine Reuters or Bloomberg creating systems where every data point carries an auditable trail from primary source through every transformation. Companies like Truepic are already pioneering this for visual media, and efforts around watermarking (for visual media and otherwise) are starting to move beyond copyright debates. The winners will make mediation visible, showing which patterns led to which conclusions.
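One way such an auditable trail could work is a simple hash chain, where every transformation commits cryptographically to the step before it. This is an illustrative sketch, not the CAI's actual C2PA format; the field names and actors are assumptions.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    # Canonical serialisation so the hash is order-independent.
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

def append_step(chain: list, actor: str, action: str, payload: str) -> list:
    """Add one transformation, linked to the previous step's hash."""
    prev_hash = chain[-1]["hash"] if chain else None
    step = {"actor": actor, "action": action,
            "payload": payload, "prev": prev_hash}
    step["hash"] = _digest(step)
    return chain + [step]

def verify(chain: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    prev = None
    for step in chain:
        body = {k: v for k, v in step.items() if k != "hash"}
        if step["prev"] != prev or _digest(body) != step["hash"]:
            return False
        prev = step["hash"]
    return True

# Hypothetical trail: primary source -> AI summarisation.
chain: list = []
chain = append_step(chain, "wire_service", "capture", "rate_change=+0.25")
chain = append_step(chain, "summariser_v2", "transform", "Fed raises rates")
print(verify(chain))  # intact chain verifies

tampered = [dict(chain[0], payload="rate_change=-0.25"), chain[1]]
print(verify(tampered))  # altered payload no longer matches its hash
```

Real provenance systems add signatures and key management on top, but even this toy version shows the property that matters for trust: a reader (or their agent) can check the trail without trusting the publisher's word.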
2. Community & Context: While AI excels at pattern recognition, human communities still need local relevance and cultural nuance. The defensible moat lies in understanding specific communities' realities and facilitating shared understanding. Reddit and OpenAI’s licensing deals hint at the value of community-verified knowledge, but imagine AI systems trained on regional forums to provide hyperlocal context that no global model could match. Or translation layers that understand why Stage 4 load shedding hits Khayelitsha townships differently than Johannesburg suburbs, factoring in backup power access, small business impacts, and how communities organise around the information flows of predictable blackouts. Platforms that blend AI insights with community wisdom (think Nextdoor for machines) become the new public squares.
3. Interface & Experience: The companies that build the most intuitive, transparent translation experiences will own the relationship with end users. Perplexity already experiments with showing sources and reasoning chains. This might mean creating user controls for meaning-making, offering multiple translation options, and designing for understanding over engagement. But imagine browser extensions showing multiple AI translations of the same data side-by-side, letting users see how different models interpret identical information. The future might look like a version of Anthropic's Constitutional AI approach, where users adjust their translation layer's values and priorities (or get an agent to do it). Or forward-thinking regulators in jurisdictions such as the European Union could even enforce a right-to-transparency on translation layer companies.
While competing at the LLM layer requires massive scale and capital, these design spaces could offer defensible positions for innovative media companies. The key is recognising that serving AI audiences doesn't mean abandoning human values; it means finding new ways to deliver them.
The design window
Success in the B2A2C world isn't about choosing between human and machine audiences; it's about designing for the full pipeline. Build APIs and structured data, but never forget the human at the end of the chain. The winners will combine machine efficiency with irreplaceable human value, whether that's trust, community knowledge, or superior translation. Scale won't save you, but differentiation might.
The architectures emerging now will lock in power structures for decades. We have a choice: drift toward monopolistic translation layers where human insight becomes a luxury good, or build intentionally, creating systems that amplify human agency rather than replacing it. The most promising path combines both: AI for scale and pattern detection, humans for meaning and wisdom. But this hybrid future won't happen by accident. Those who move deliberately now will write the rules everyone else follows.
We're already seeing glimpses of another evolution: B2A2A2A—AI systems creating information purely for other AI systems, no humans involved. High-frequency financial trading already runs on this. Autonomous agents are starting to build their own knowledge networks. When machines start having conversations we can't even understand, the translation layers we build today become our only windows into those exchanges. Miss this design window, and you might find yourself locked out of the conversation entirely.
The machines are already in the audience. The question is: Are we designing the system, or is it designing us?