Newsfeed Curation SNS Dashboard Journal

How to train AI on your writing style

hackernews | 💼 Business
#ai customization #ai training #chatgpt #custom instructions #tip #writing style #prompt engineering

Summary

For AI to mimic a writer's distinctive style, simply pasting in example passages or setting vague custom instructions is not enough. These approaches produce only short-lived effects and miss the subtle sentence patterns and structural traits the writer is not consciously aware of. Instead, you should analyze a range of your own writing and build a structured guide that systematically catalogs 50+ concrete patterns and word-choice preferences. Applying that guide as a system prompt, then testing and revising it, dramatically improves output quality and reproduces the writer's voice far more accurately.

Why it matters

Developer perspective

Under review.

Researcher perspective

Under review.

Business perspective

Under review.

Body

Custom instructions capture what you can describe about your writing. The patterns that make you distinctive are the ones you've never articulated. Custom instructions and pasting examples capture the surface of your writing style. To make AI actually sound like you, you need a structured map of your patterns extracted from your real writing.

Key takeaways:

- Pasting examples into ChatGPT captures surface patterns, then drifts within 10-15 messages.
- Custom instructions describe your voice from memory. Reading your own writing for patterns discovers what self-description misses.
- A structured voice guide with 50+ concrete patterns outperforms a settings field every time.

The prompt that almost works

There's a prompt on Reddit that goes something like this: "Read these three examples of my writing before you do anything else. Don't write anything yet. First tell me: my tone in three words, something I do consistently that most writers don't, words I never use, how my sentences run."

It works. The model reads your samples, identifies real patterns, and the next few outputs land closer to your voice. Then you start a new conversation and the patterns are gone.

Three samples also cap what the model can find. You get sentence length and word avoidance, but not argument structure or how your pacing shifts between a tweet and a blog post. After 10-15 messages the examples lose their grip on the model's attention, and the output drifts back to default.

Why custom instructions plateau

Search "how to make ChatGPT sound like me" and every guide gives the same advice: paste examples into custom instructions, add a line about your tone. Zapier, Copy.ai, and Forte Labs all say the same thing. These guides all stop at a paragraph of preferences in a settings field.

That's describing your voice from memory. You write down what you think you do: "direct," "uses short sentences," "avoids jargon." This captures what you can consciously articulate.
That's a fraction of what makes your writing yours. The rest is patterns below the level of self-description: which punctuation you avoid entirely, what constructions you reach for when making a concession. You've been doing these things for years without thinking about them. Describing from memory and discovering by reading are two different processes. Custom instructions do the first.

And nothing you learn in a single session carries forward. You notice the model keeps using "ensure" when you'd say "make sure." You fix it. Next session, "ensure" is back. The correction was never captured anywhere persistent.

The promise of AI writing was speed. The reality is you generate a draft in seconds and spend the next ten minutes editing it back into your voice. You restructure arguments that don't build the way yours do. We broke down the math on that here.

How to extract your writing patterns

This takes a few hours, and the output is a structured document you can use as a system prompt on any model.

1. Collect 5-10 samples per format you write. Tweets, blog posts, emails, whatever you publish regularly. Mix recent work with older pieces so you're capturing stable patterns, not just last week's habits.

2. Read them side by side, looking for specifics. Not "what's my tone" but concrete observations: words you reach for repeatedly, words you never use, how your sentences end, where your analogies come from. Read at the sentence level first, then zoom out to paragraph structure and argument flow. The sentence-level patterns are what custom instructions miss most.

3. Write what you find, organized by category. Word choices in one section, sentence patterns in another. Be specific: "I never use the word 'utilize'" is useful; "I write in a direct tone" is not. Categories to work through: word preferences (use and avoid), sentence endings, punctuation habits, analogy domains, how you handle concessions, how you open and close pieces.

4. Separate what's stable from what shifts by format. Your word choices hold across tweets and long-form, but your sentence pacing changes. Note which patterns are core to everything you write and which adapt by format. Core patterns go into every prompt. Format-specific ones load only when you're writing that format.

5. Test and revise. Paste the full document as a system prompt before a writing task. Generate something, then compare the output against your actual writing. Where the output diverges, your guide is either missing a pattern or describing one too vaguely. Update and test again. A 50-line document of concrete patterns outperforms a 5-line paragraph of adjectives every time.

What this catches and what it misses

Manual extraction catches a lot: word avoidance and analogy clustering, sentence rhythm, and how you open and close pieces. What it misses is subtler. Some patterns only emerge across 30-50 samples: how your argument density shifts mid-post, or the constructions you use when bridging from evidence to conclusion.
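The core/format split in step 4 can be sketched in code. This is a minimal illustration, not part of the original article: the guide structure, the pattern text, and the `build_system_prompt` helper are all hypothetical examples of how you might keep core patterns in every prompt while loading format-specific ones on demand.

```python
# Hypothetical voice guide: core patterns apply to everything you write,
# format-specific patterns load only for that format (step 4).
VOICE_GUIDE = {
    "core": [
        "Word choices: prefer 'make sure' over 'ensure'; never use 'utilize'.",
        "Punctuation: no semicolons; short declarative sentences.",
        "Analogies: draw from cooking and carpentry, not sports.",
    ],
    "formats": {
        "tweet": [
            "Pacing: one idea per sentence, at most two sentences per thought.",
        ],
        "blog_post": [
            "Openings: start with a concrete scene, not a thesis statement.",
            "Closings: end on an implication, never a summary.",
        ],
    },
}

def build_system_prompt(guide: dict, fmt: str) -> str:
    """Assemble a system prompt: core patterns plus the requested format's."""
    lines = ["You are ghostwriting in my voice. Follow every rule below."]
    lines += guide["core"]
    lines += guide["formats"].get(fmt, [])  # unknown format -> core only
    return "\n".join(lines)

print(build_system_prompt(VOICE_GUIDE, "tweet"))
```

The resulting string is what you paste (or send programmatically) as the system prompt in step 5; when a test run diverges from your real writing, you edit the pattern lists and rebuild.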
