Making a landing page that works for both humans and AI agents
📰 News
#open-source
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
While redesigning the landing page, the author realized it has to work in two modes: persuasive HTML for humans and markdown for AI agents. If the product is about documentation that serves both humans and AI systems, the landing page cannot remain a mere marketing layer; it is only truly consistent once AI agents can use it as well.
Full text
I spent this week redesigning my landing page. The surprising part was not the typography, the footer, or the mobile nav. It was realizing that the page had to work in two completely different modes: persuasive HTML for humans, and boring markdown for agents.

That sounds obvious in hindsight. Of course it should work for both. The product I am building is about documentation that works for both humans and AI systems. If the landing page itself only works for humans, that is a weird contradiction. But I think a lot of us still separate these things mentally. We think of the landing page as the glossy marketing layer. We think of llms.txt, markdown routes, and agent access as documentation infrastructure. The more I worked on this, the less convinced I became that those are separate jobs.

If someone asks an agent "what does this product do?", or "how much does it cost?", or "is there a CLI?", the answer is probably not coming from your docs reference first. It is often coming from your landing page, your pricing page, your tools page, your blog, and whatever else explains the product in plain language. If those pages are hard to read mechanically, then a decent amount of your product positioning is trapped in human-only HTML.

The landing page is part of the interface now

There is a simple version of this argument: if more product discovery is happening through AI answers, then the pages that explain the product are part of the API surface whether we like it or not. Not API surface in the strict REST sense. I mean interface surface. They answer questions like:

- what problem does this solve?
- who is it for?
- what do I do first?
- is there a free plan?
- does this have a CLI?
- does this support API docs?

For years, landing pages were allowed to be a little hand-wavy because a human reader could fill in the gaps. They could infer things from layout, screenshots, tone, and the general vibe of the page. Agents are worse at that. They are better at following structure. They are better at clear copy. They are better at plain text. They are better when you give them a predictable place to look. That means a lot of normal landing-page habits stop being enough.

The first problem was content, not code

Once you decide a page should have a machine-readable version, the fluff gets exposed instantly. A human reader might skim past inflated language. A markdown file makes every weak sentence much more obvious. That ended up being useful. It forced me to write the page in a way that was clearer, more literal, and more defensible. I think that is one of the more underrated side effects of building for agents: it is a good test for whether your copy makes sense at all.

I did not want a separate "AI site"

One tempting solution here is to build a totally different AI-facing microsite, or to dump a bunch of product facts into llms.txt and call it a day. I did not want that. The maintenance cost is bad, but the bigger problem is conceptual: if the human-facing page and the agent-facing page disagree, which one is real? So I used a simpler rule: the human page stays the canonical experience in the browser, and the machine-readable page should be a cleaner representation of the same reality.

That led to a structure like this:

- / stays the normal landing page
- /index.md becomes the canonical machine-readable landing page
- /llms.txt serves the same underlying content as /index.md
- /pricing.md, /tools.md, and /blog.md do the same for the main supporting routes
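As a rough illustration (not the author's actual implementation, whose stack the post does not name), an Express-style Node server could wire that structure roughly as sketched below; the file paths, variable names, and port are assumptions:

```ts
// Illustrative sketch only: one canonical markdown source per page,
// with the HTML route, the .md route, and llms.txt all serving views of it.
import express from "express";
import { readFileSync } from "node:fs";
import path from "node:path";

const app = express();

// Assumed content files; the real site surely organizes this differently.
const landingMd = readFileSync("content/landing.md", "utf8");
const pricingMd = readFileSync("content/pricing.md", "utf8");

// / stays the designed, human-facing landing page.
app.get("/", (_req, res) => {
  res.sendFile(path.join(process.cwd(), "public", "index.html"));
});

// /index.md is the canonical machine-readable landing page.
app.get("/index.md", (_req, res) => {
  res.type("text/markdown").send(landingMd);
});

// /llms.txt serves the same underlying landing content, so discovery
// and representation cannot drift apart.
app.get("/llms.txt", (_req, res) => {
  res.type("text/plain").send(landingMd);
});

// Supporting routes follow the same pattern.
app.get("/pricing.md", (_req, res) => {
  res.type("text/markdown").send(pricingMd);
});

app.listen(3000);
```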
I also added a visible way for humans to switch to the markdown version, mostly because I dislike hidden infrastructure. If something is important enough to exist, it should usually be visible somewhere.

Why I kept both Accept: text/markdown and .md routes

In theory, content negotiation should be enough. If a client sends:

    Accept: text/markdown

then the same route can return markdown instead of HTML. That part is nice because the URL does not change. A browser asks for HTML, an agent asks for markdown, everyone gets what they want.

But the web is never that clean. Some agents send the right headers. Some do not. Some tools are inconsistent. Some users just want a stable URL they can paste into a config file or inspect manually. So I kept both:

- explicit .md routes like /pricing.md
- content negotiation on the corresponding human routes

Nothing especially clever there. The important part was being strict about explicit markdown media types instead of treating curl's default */* as a signal. That would have caused too many false positives.
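What that strictness might look like, continuing the same assumed Express-style setup, is sketched below: only an explicit text/markdown in the Accept header switches the response. The middleware name, the mdUrl parameter, and the Link discovery header are illustrative choices, not details taken from the post:

```ts
// Illustrative middleware: serve markdown only when the client asks for it
// explicitly, so curl's default "Accept: */*" keeps receiving HTML.
import type { Request, Response, NextFunction } from "express";

function markdownNegotiation(markdown: string, mdUrl: string) {
  return (req: Request, res: Response, next: NextFunction) => {
    const accept = req.headers.accept ?? "";
    // Deliberately not req.accepts("text/markdown"): that helper would also
    // match */*, which is exactly the false positive described above.
    if (accept.includes("text/markdown")) {
      res.type("text/markdown").send(markdown);
      return;
    }
    // Advertise the stable .md alternate. The post mentions "discovery
    // headers" without naming them; a Link header is one plausible reading.
    res.setHeader("Link", `<${mdUrl}>; rel="alternate"; type="text/markdown"`);
    next(); // fall through to the normal HTML handler
  };
}

// Hypothetical usage on a human route:
//   app.get("/pricing", markdownNegotiation(pricingMd, "/pricing.md"), renderPricingHtml);
```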
llms.txt is useful, but it is not enough

I still think llms.txt matters. It is a good discovery mechanism. But one thing I keep noticing is that people treat it as if it solves the whole problem. It does not. llms.txt helps an agent discover what exists. It does not automatically make the content of the actual product pages cheap or easy to consume.

If someone pastes your pricing page into an agent, or the agent follows a link to the landing page, the thing that matters in that moment is not that llms.txt exists. The thing that matters is whether the page itself can be represented cleanly. That is why I made /llms.txt and /index.md serve the same underlying landing content. I wanted discovery and representation to stay aligned instead of becoming two independent documents with their own drift.

The design work and the machine-readable work were the same project

I do not think of this as "I redesigned the landing page, and separately I added some AI files." It was one project.

The design pass forced the page to be simpler and more legible:

- clearer navbar
- less clutter in the hero
- cleaner CTA structure
- more consistent footer resources
- better mobile behavior
- more honest pricing copy

The machine-readable pass forced the underlying content and route structure to be more explicit:

- stable markdown URLs
- shared canonical content
- discovery headers
- a CLI onboarding path that agents can actually follow

Those two things reinforced each other. A simpler human page made the markdown representation better. A better markdown representation made the human copy less sloppy. That is probably the part I did not expect going in.

Why I think this matters beyond my site

I do not think every marketing site needs to become an AI-first artifact. But I do think more teams are going to discover the same problem: their knowledge base may be in decent shape, while the pages that explain the product, pricing, setup, and workflow are still inaccessible or expensive for agents to consume.

That becomes a weird bottleneck. The docs say one thing. The landing page says another. The pricing page is trapped in a JavaScript-heavy layout. The CLI exists, but the onboarding path is only obvious to a human clicking around. Agents can work around a lot of this. That does not mean they should have to.

My guess is that the next normal version of public product pages will have two equally intentional representations:

- the designed page for humans
- the plain representation for machines

Not because of ideology. Just because the distribution surface has changed.

What I would do first if I were retrofitting another site

If I had to do this again on a different site, I would do it in this order:

- fix the copy first
- create a canonical markdown version of the landing page
- add llms.txt
- add markdown routes for pricing and other high-intent pages
- add explicit discovery headers
- only then bother with fancier switching and UI affordances

The order matters because clean content beats clever plumbing. If the page is still vague, markdown just exposes vague text faster.

The useful takeaway

The practical lesson for me was pretty small: do not treat your landing page as separate from your agent surface. If the page explains what the product is, how it works, how much it costs, and how to get started, then that page is already part of the input material for agents. You can either make that representation explicit, or let every tool scrape and guess. I would rather make it explicit.

That ended up being one of the more satisfying weeks I have had in a while, partly because the result is visible and partly because it removed a contradiction that had been bothering me. The landing page now says DocsAlot is for humans and agents. Now it behaves like it.
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.