[{"data":1,"prerenderedAt":594},["ShallowReactive",2],{"blog-post:en:\u002Fen\u002Fblog\u002Fwhy-ai-responds-in-markdown":3},{"id":4,"title":5,"author":6,"body":7,"category":567,"date":568,"description":569,"extension":570,"image":571,"imageAlt":572,"meta":573,"navigation":574,"path":575,"seo":576,"stem":577,"tags":578,"translationKey":579,"__hash__":580,"html":581,"bodyMarkdown":582,"translations":583,"previous":589,"next":578},"blog\u002Fen\u002Fblog\u002F4.why-ai-responds-in-markdown.md","Markdown and AI: why all LLMs adopted this format","Antoine Frankart",{"type":8,"value":9,"toc":556},"minimark",[10,19,22,25,30,33,36,39,92,100,104,107,110,113,117,120,123,126,130,133,136,213,220,223,227,230,361,364,367,371,374,382,385,388,427,430,434,441,448,463,466,469,472,476,479,482,506,509,512,515,519,522,525,528,531,539,542,545,552],[11,12,13,14,18],"p",{},"Ask Claude, ChatGPT, or Gemini to explain a technical concept. Look closely at the answer. There are headers with ",[15,16,17],"code",{},"##",", text surrounded by **, bulleted lists, and other formatting symbols. You are reading Markdown.",[11,20,21],{},"Most people don't realize it. They see well-formatted, clear, spaced-out text. But behind this rendering, there is a syntax that the model produced and that the interface transformed on the fly.",[11,23,24],{},"This is the result of a series of technical decisions, which explain why Markdown became the default output format for all major LLMs, and why a tool designed to read it properly is essential.",[26,27,29],"h2",{"id":28},"what-ai-actually-produces","What AI actually produces",[11,31,32],{},"When you ask an LLM a question, it doesn't generate HTML. It doesn't produce Word, or a PDF. 
It generates raw text with a lightweight syntax.",[11,34,35],{},"This syntax is Markdown.",[11,37,38],{},"Here is what a typical response from Claude looks like in raw text, before rendering:",[40,41,46],"pre",{"className":42,"code":43,"language":44,"meta":45,"style":45},"language-markdown shiki shiki-themes github-light github-dark","## Steps to configure your environment\n\n1. Install **Node.js** (version 18 or higher)\n2. Clone the repository with `git clone https:\u002F\u002Fgithub.com\u002F...`\n3. Run `npm install` in the project folder\n\n> **Note:** check your version with `node --version` before continuing.\n","markdown","",[15,47,48,56,63,69,75,81,86],{"__ignoreMap":45},[49,50,53],"span",{"class":51,"line":52},"line",1,[49,54,55],{},"## Steps to configure your environment\n",[49,57,59],{"class":51,"line":58},2,[49,60,62],{"emptyLinePlaceholder":61},true,"\n",[49,64,66],{"class":51,"line":65},3,[49,67,68],{},"1. Install **Node.js** (version 18 or higher)\n",[49,70,72],{"class":51,"line":71},4,[49,73,74],{},"2. Clone the repository with `git clone https:\u002F\u002Fgithub.com\u002F...`\n",[49,76,78],{"class":51,"line":77},5,[49,79,80],{},"3. Run `npm install` in the project folder\n",[49,82,84],{"class":51,"line":83},6,[49,85,62],{"emptyLinePlaceholder":61},[49,87,89],{"class":51,"line":88},7,[49,90,91],{},"> **Note:** check your version with `node --version` before continuing.\n",[11,93,94,95,99],{},"What the model wrote: seven lines of Markdown syntax. AI ",[96,97,98],"strong",{},"writes"," Markdown, and it's also a format it reads very well.",[26,101,103],{"id":102},"a-brief-history-of-markdown","A brief history of Markdown",[11,105,106],{},"Markdown was created in 2004 by John Gruber and Aaron Swartz with a specific goal: to write text readable by both humans in its raw form and by machines to convert it to HTML. 
The idea was simple: use existing typographic conventions (asterisks for bold, hashes for headers) rather than inventing a new syntax.",[11,108,109],{},"In its early days, Markdown was mostly a tool for bloggers. Then GitHub adopted it for README files in 2009, and it exploded. In just a few years, the format became the standard for technical documentation, wikis, product specs, and developer notes.",[11,111,112],{},"It is this omnipresence in the development ecosystem (billions of documents, comments, issues, pull requests) that laid the foundation for its predominance with AI.",[26,114,116],{"id":115},"why-llms-naturally-adopted-markdown","Why LLMs naturally adopted Markdown",[11,118,119],{},"Large language models were trained on billions of tokens from the web, GitHub repositories, technical documentation, forums like Stack Overflow and Reddit. A massive part of this corpus is written in Markdown: READMEs, wikis, doc articles, product specifications, blog posts.",[11,121,122],{},"The model didn't \"learn\" to write Markdown via an explicit rule. It internalized it because it's the dominant language in the data it was trained on. When asked to structure an answer, it reproduces the patterns it has seen millions of times.",[11,124,125],{},"This adoption isn't just a matter of style. It is also deeply linked to efficiency.",[26,127,129],{"id":128},"token-efficiency-the-metric-that-explains-everything","Token efficiency: the metric that explains everything",[11,131,132],{},"A token is not a word. It is a unit of text, roughly 4 characters in English. Each request to an LLM costs tokens for input and output. 
And every token counts, in terms of cost, latency, and context limit.",[11,134,135],{},"Let's compare the same formatting in different formats:",[137,138,139,157],"table",{},[140,141,142],"thead",{},[143,144,145,150,153],"tr",{},[146,147,149],"th",{"align":148},"left","Format",[146,151,152],{"align":148},"Syntax for \"important\" in bold",[146,154,156],{"align":155},"center","Approximate tokens",[158,159,160,174,187,200],"tbody",{},[143,161,162,166,171],{},[163,164,165],"td",{"align":148},"Markdown",[163,167,168],{"align":148},[15,169,170],{},"**important**",[163,172,173],{"align":155},"4",[143,175,176,179,184],{},[163,177,178],{"align":148},"HTML",[163,180,181],{"align":148},[15,182,183],{},"\u003Cstrong>important\u003C\u002Fstrong>",[163,185,186],{"align":155},"10",[143,188,189,192,197],{},[163,190,191],{"align":148},"RTF",[163,193,194],{"align":148},[15,195,196],{},"{\\b important}",[163,198,199],{"align":155},"7",[143,201,202,205,210],{},[163,203,204],{"align":148},"LaTeX",[163,206,207],{"align":148},[15,208,209],{},"\\textbf{important}",[163,211,212],{"align":155},"8",[11,214,215,216,219],{},"This is not insignificant on a long document. Cloudflare explained that switching to Markdown in their LLM pipelines allowed them to ",[96,217,218],{},"reduce token usage by 80%",". This figure is often cited, and it illustrates something real: the verbosity of a format has a direct cost in the LLM economy.",[11,221,222],{},"Markdown is designed to be readable by humans in its raw form. It doesn't have the verbosity of HTML or the complexity of LaTeX. 
For an LLM that pays for every character in performance and cost, it is the ideal format.",[26,224,226],{"id":225},"markdown-vs-other-formats-the-complete-comparison","Markdown vs other formats: the complete comparison",[11,228,229],{},"Here is what it looks like when we compare the main formats on the criteria that matter for AI usage:",[137,231,232,251],{},[140,233,234],{},[143,235,236,239,241,243,246,249],{},[146,237,238],{"align":148},"Criterion",[146,240,165],{"align":148},[146,242,178],{"align":148},[146,244,245],{"align":148},"JSON",[146,247,248],{"align":148},"Plain text",[146,250,204],{"align":148},[158,252,253,271,290,307,325,342],{},[143,254,255,258,261,264,267,269],{},[163,256,257],{"align":148},"Token efficiency",[163,259,260],{"align":148},"✅ High",[163,262,263],{"align":148},"❌ Low",[163,265,266],{"align":148},"⚠️ Medium",[163,268,260],{"align":148},[163,270,263],{"align":148},[143,272,273,276,279,282,285,287],{},[163,274,275],{"align":148},"Raw readability (human)",[163,277,278],{"align":148},"✅ Excellent",[163,280,281],{"align":148},"❌ Difficult",[163,283,284],{"align":148},"⚠️ Partial",[163,286,278],{"align":148},[163,288,289],{"align":148},"❌ Technical",[143,291,292,295,298,300,302,305],{},[163,293,294],{"align":148},"Semantic structure",[163,296,297],{"align":148},"✅ Yes",[163,299,297],{"align":148},[163,301,297],{"align":148},[163,303,304],{"align":148},"❌ No",[163,306,297],{"align":148},[143,308,309,312,315,318,320,322],{},[163,310,311],{"align":148},"Simple visual rendering",[163,313,314],{"align":148},"✅ Easy",[163,316,317],{"align":148},"✅ Native",[163,319,304],{"align":148},[163,321,304],{"align":148},[163,323,324],{"align":148},"⚠️ Complex",[143,326,327,330,333,335,337,339],{},[163,328,329],{"align":148},"Present in training corpus",[163,331,332],{"align":148},"✅ Massive",[163,334,332],{"align":148},[163,336,284],{"align":148},[163,338,332],{"align":148},[163,340,341],{"align":148},"⚠️ 
Specialized",[143,343,344,347,349,352,355,358],{},[163,345,346],{"align":148},"Ideal for AI agents",[163,348,297],{"align":148},[163,350,351],{"align":148},"❌ Verbose",[163,353,354],{"align":148},"⚠️ Data only",[163,356,357],{"align":148},"❌ Unstructured",[163,359,360],{"align":148},"❌ Too technical",[11,362,363],{},"Plain text is compact, but it loses all semantic structure. JSON is structured but hard to read without rendering. HTML is rich but too verbose. Markdown occupies the sweet spot: compact, readable, structured.",[11,365,366],{},"It is no coincidence that all major models, like ChatGPT, Claude, Gemini, Mistral, use the Markdown format by default in their responses.",[26,368,370],{"id":369},"how-markdown-helps-ai-reason-better","How Markdown helps AI reason better",[11,372,373],{},"There is a lesser-known aspect of the relationship between LLMs and Markdown: the structure is not only useful for output. It influences the reasoning itself.",[11,375,376,377,381],{},"Several studies show that asking an LLM to structure its response, with headers, numbered steps, lists, improves the quality of reasoning. The model, by structuring, breaks down the problem. It is an implicit form of ",[378,379,380],"em",{},"chain-of-thought",".",[11,383,384],{},"A well-structured response in Markdown is not only easier to read. It is often more correct, because the model was forced to organize its thoughts into distinct sections.",[11,386,387],{},"This is also why effective system prompts often use Markdown. 
We don't just tell the AI to \"be precise\": we give it a skeleton structure to follow.",[40,389,391],{"className":42,"code":390,"language":44,"meta":45,"style":45},"## Context\nYou are a software architecture expert.\n\n## Expected response format\n- Start with a 2-3 sentence summary.\n- Detail the steps in a numbered list.\n- End with a ## Key takeaways section.\n",[15,392,393,398,403,407,412,417,422],{"__ignoreMap":45},[49,394,395],{"class":51,"line":52},[49,396,397],{},"## Context\n",[49,399,400],{"class":51,"line":58},[49,401,402],{},"You are a software architecture expert.\n",[49,404,405],{"class":51,"line":65},[49,406,62],{"emptyLinePlaceholder":61},[49,408,409],{"class":51,"line":71},[49,410,411],{},"## Expected response format\n",[49,413,414],{"class":51,"line":77},[49,415,416],{},"- Start with a 2-3 sentence summary.\n",[49,418,419],{"class":51,"line":83},[49,420,421],{},"- Detail the steps in a numbered list.\n",[49,423,424],{"class":51,"line":88},[49,425,426],{},"- End with a ## Key takeaways section.\n",[11,428,429],{},"This kind of prompt works because the model recognizes Markdown patterns and naturally integrates them into its generation.",[26,431,433],{"id":432},"the-weak-point-reading","The weak point: reading",[11,435,436,437,440],{},"Everyone talks about Markdown as a tool to ",[378,438,439],{},"write"," prompts. To structure context notes. To organize specs sent to AI.",[11,442,443,444,447],{},"This is true, but there is a flaw: ",[96,445,446],{},"you receive Markdown in return",", and most AI applications or coding tools don't provide a pleasant and efficient interface to read it properly.",[11,449,450,451,454,455,458,459,462],{},"You ask Claude to write a well-structured 3,000-word product spec. It delivers it in Markdown. You copy-paste it into a Word document, the formatting goes up in smoke. You open it in a text editor, you read raw ",[15,452,453],{},"## Title"," and ",[15,456,457],{},"**bold**",". 
You email it to a colleague, they don't know what to do with the ",[15,460,461],{},".md"," file.",[11,464,465],{},"This is not a Markdown problem. It's a Markdown file reader problem.",[11,467,468],{},"Markdown was never designed to be read in its raw form. It was designed to be rendered. And yet, in AI workflows, outputs are copied into tools that lack even very basic Markdown rendering.",[11,470,471],{},"This is what our Fude app solves: a pleasant and efficient interface to read Markdown. The ability to customize the display, links between files, a table of contents, synchronization across your devices...",[26,473,475],{"id":474},"markdown-and-ai-agents-the-mcp-case","Markdown and AI agents: the MCP case",[11,477,478],{},"With the rise of AI agents and the MCP (Model Context Protocol), the relationship between Markdown and AI takes on a new dimension.",[11,480,481],{},"AI agents read your notes to help you work. They analyze your specs to extract tasks. They scan your documents to answer contextual questions.",[11,483,484,485,487,488,491,492,495,496,499,500,499,502,505],{},"A well-structured ",[15,486,461],{}," file is much easier for an agent to parse than a ",[15,489,490],{},".docx"," or a ",[15,493,494],{},".pdf",". The hierarchy of headers (",[15,497,498],{},"#",", ",[15,501,17],{},[15,503,504],{},"###",") provides an implicit table of contents. Code blocks are clearly delineated. Lists can be parsed unambiguously.",[11,507,508],{},"In practice: if you store your meeting notes, your specs, your brainstorms in Markdown files, you give them the ideal structure to be exploited by your AI agents. 
No need for an additional processing layer.",[11,510,511],{},"This is the virtuous cycle of Markdown in the AI era: you write in a format the AI understands natively, the AI responds in that same format, and you re-read and reuse this content with the same tools.",[11,513,514],{},"Fude offers a local MCP server that allows you to connect your notes saved in Fude to any compatible AI: Claude, ChatGPT, Cursor, and many others. The AI agent can read your documents, analyze your writing style, or help you draft new content inspired by your existing notes thanks to our MCP server.",[26,516,518],{"id":517},"reading-ai-properly-fudes-goal","Reading AI properly: Fude's goal",[11,520,521],{},"Existing Markdown tools are almost all designed for writing. Obsidian, Notion, Typora, their core value proposition is editing.",[11,523,524],{},"But when you receive an answer from Claude or a spec generated by Codex, you don't need an editor. You need a reader. Something that renders Markdown cleanly, on all your devices, without friction.",[11,526,527],{},"That is exactly Fude's goal. Not an editor. A reader, designed so that the rendered Markdown is pleasant to consult, whether it's an AI-generated spec, a doc article, or meeting notes.",[11,529,530],{},"You connect your existing sources (local files, GitHub, Google Drive), you organize your content into projects, and you read. Cleanly. With Markdown rendering that respects hierarchy, tables, code blocks, and Mermaid diagrams. And with a built-in MCP server, your notes become directly accessible to your AI agents.",[11,532,533,534,381],{},"To understand the vision behind Fude, you can read our article ",[535,536,538],"a",{"href":537},"\u002Fen\u002Fblog\u002Ffude-markdown-reader-ai-era","Fude a Markdown reader for the AI era",[540,541],"hr",{},[11,543,544],{},"Markdown is not just another format. It's the language in which AI thinks and expresses itself. 
If you use LLMs seriously, to write, analyze, specify, document, you are producing dozens of Markdown files every week, often without knowing it.",[11,546,547,548],{},"📌 ",[535,549,551],{"href":550},"\u002Fen#download","Try Fude, the Markdown reader for the AI era",[553,554,555],"style",{},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":45,"searchDepth":58,"depth":58,"links":557},[558,559,560,561,562,563,564,565,566],{"id":28,"depth":58,"text":29},{"id":102,"depth":58,"text":103},{"id":115,"depth":58,"text":116},{"id":128,"depth":58,"text":129},{"id":225,"depth":58,"text":226},{"id":369,"depth":58,"text":370},{"id":432,"depth":58,"text":433},{"id":474,"depth":58,"text":475},{"id":517,"depth":58,"text":518},"ai","2026-04-18","LLMs output Markdown with every response. It's not a coincidence: token efficiency, semantic structure, training corpus. 
Here is why all LLMs adopted Markdown as their native format, and what it changes for you.","md","\u002Fimages\u002Fblog\u002Fblog4","Why AI responds in Markdown",{},false,"\u002Fen\u002Fblog\u002Fwhy-ai-responds-in-markdown",{"title":5,"description":569},"en\u002Fblog\u002F4.why-ai-responds-in-markdown",null,"why-ai-responds-in-markdown","SW5mVBxj1WpBGnOh1-c62d3l24vcaAak0iuQ-Jb7W6Y","\u003Cp>\u003Cspan data-fude-source-start=\"1\" data-fude-source-end=\"115\">Ask Claude, ChatGPT, or Gemini to explain a technical concept. Look closely at the answer. There are headers with \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"115\" data-fude-source-end=\"119\">##\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"119\" data-fude-source-end=\"215\">, text surrounded by **, bulleted lists, and other formatting symbols. You are reading Markdown.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"217\" data-fude-source-end=\"411\">Most people don't realize it. They see well-formatted, clear, spaced-out text. But behind this rendering, there is a syntax that the model produced and that the interface transformed on the fly.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"413\" data-fude-source-end=\"605\">This is the result of a series of technical decisions, which explain why Markdown became the default output format for all major LLMs, and why a tool designed to read it properly is essential.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"610\" data-fude-source-end=\"635\">What AI actually produces\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"637\" data-fude-source-end=\"778\">When you ask an LLM a question, it doesn't generate HTML. It doesn't produce Word, or a PDF. 
It generates raw text with a lightweight syntax.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"780\" data-fude-source-end=\"804\">This syntax is Markdown.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"806\" data-fude-source-end=\"891\">Here is what a typical response from Claude looks like in raw text, before rendering:\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cpre style=\"background-color:var(--fude-code-background, var(--color-code-bg));color:var(--fude-code-foreground, var(--color-code-text))\" tabindex=\"0\" class=\"shiki fude-code-theme fude-code-block\" data-language=\"markdown\" data-fude-code-block-start=\"905\" data-fude-code-block-end=\"1170\">\u003Ccode class=\"language-markdown\" data-language=\"markdown\" data-fude-code-block-start=\"905\" data-fude-code-block-end=\"1170\">\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">## Steps to configure your environment\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">1. Install **Node.js** (version 18 or higher)\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">2. Clone the repository with \u003C\u002Fspan>\u003Cspan style=\"color:var(--fude-code-token-string, color-mix(in srgb, var(--color-accent-primary) 78%, var(--color-code-text) 22%))\">`git clone https:\u002F\u002Fgithub.com\u002F...`\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">3. 
Run \u003C\u002Fspan>\u003Cspan style=\"color:var(--fude-code-token-string, color-mix(in srgb, var(--color-accent-primary) 78%, var(--color-code-text) 22%))\">`npm install`\u003C\u002Fspan>\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\"> in the project folder\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">> **Note:** check your version with \u003C\u002Fspan>\u003Cspan style=\"color:var(--fude-code-token-string, color-mix(in srgb, var(--color-accent-primary) 78%, var(--color-code-text) 22%))\">`node --version`\u003C\u002Fspan>\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\"> before continuing.\u003C\u002Fspan>\u003C\u002Fspan>\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>\u003Cspan data-fude-source-start=\"1176\" data-fude-source-end=\"1231\">What the model wrote: seven lines of Markdown syntax. AI \u003C\u002Fspan>\u003Cstrong>\u003Cspan data-fude-source-start=\"1233\" data-fude-source-end=\"1239\">writes\u003C\u002Fspan>\u003C\u002Fstrong>\u003Cspan data-fude-source-start=\"1241\" data-fude-source-end=\"1294\"> Markdown, and it's also a format it reads very well.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"1299\" data-fude-source-end=\"1326\">A brief history of Markdown\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"1328\" data-fude-source-end=\"1638\">Markdown was created in 2004 by John Gruber and Aaron Swartz with a specific goal: to write text readable by both humans in its raw form and by machines to convert it to HTML. 
The idea was simple: use existing typographic conventions (asterisks for bold, hashes for headers) rather than inventing a new syntax.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"1640\" data-fude-source-end=\"1889\">In its early days, Markdown was mostly a tool for bloggers. Then GitHub adopted it for README files in 2009, and it exploded. In just a few years, the format became the standard for technical documentation, wikis, product specs, and developer notes.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"1891\" data-fude-source-end=\"2055\">It is this omnipresence in the development ecosystem (billions of documents, comments, issues, pull requests) that laid the foundation for its predominance with AI.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"2060\" data-fude-source-end=\"2095\">Why LLMs naturally adopted Markdown\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"2097\" data-fude-source-end=\"2372\">Large language models were trained on billions of tokens from the web, GitHub repositories, technical documentation, forums like Stack Overflow and Reddit. A massive part of this corpus is written in Markdown: READMEs, wikis, doc articles, product specifications, blog posts.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"2374\" data-fude-source-end=\"2616\">The model didn't \"learn\" to write Markdown via an explicit rule. It internalized it because it's the dominant language in the data it was trained on. When asked to structure an answer, it reproduces the patterns it has seen millions of times.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"2618\" data-fude-source-end=\"2701\">This adoption isn't just a matter of style. 
It is also deeply linked to efficiency.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"2706\" data-fude-source-end=\"2759\">Token efficiency: the metric that explains everything\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"2761\" data-fude-source-end=\"2966\">A token is not a word. It is a unit of text, roughly 4 characters in English. Each request to an LLM costs tokens for input and output. And every token counts, in terms of cost, latency, and context limit.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"2968\" data-fude-source-end=\"3023\">Let's compare the same formatting in different formats:\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ctable>\n\u003Cthead>\n\u003Ctr>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3027\" data-fude-source-end=\"3033\">Format\u003C\u002Fspan>\u003C\u002Fth>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3036\" data-fude-source-end=\"3066\">Syntax for \"important\" in bold\u003C\u002Fspan>\u003C\u002Fth>\n\u003Cth align=\"center\">\u003Cspan data-fude-source-start=\"3069\" data-fude-source-end=\"3087\">Approximate tokens\u003C\u002Fspan>\u003C\u002Fth>\n\u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"3116\" data-fude-source-end=\"3124\">Markdown\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Ccode>\u003Cspan data-fude-source-start=\"3127\" data-fude-source-end=\"3142\">**important**\u003C\u002Fspan>\u003C\u002Fcode>\u003C\u002Ftd>\n\u003Ctd align=\"center\">\u003Cspan data-fude-source-start=\"3145\" data-fude-source-end=\"3146\">4\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"3151\" data-fude-source-end=\"3155\">HTML\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Ccode>\u003Cspan data-fude-source-start=\"3158\" 
data-fude-source-end=\"3186\">&#x3C;strong>important&#x3C;\u002Fstrong>\u003C\u002Fspan>\u003C\u002Fcode>\u003C\u002Ftd>\n\u003Ctd align=\"center\">\u003Cspan data-fude-source-start=\"3189\" data-fude-source-end=\"3191\">10\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"3196\" data-fude-source-end=\"3199\">RTF\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Ccode>\u003Cspan data-fude-source-start=\"3202\" data-fude-source-end=\"3218\">{\\b important}\u003C\u002Fspan>\u003C\u002Fcode>\u003C\u002Ftd>\n\u003Ctd align=\"center\">\u003Cspan data-fude-source-start=\"3221\" data-fude-source-end=\"3222\">7\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"3227\" data-fude-source-end=\"3232\">LaTeX\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Ccode>\u003Cspan data-fude-source-start=\"3235\" data-fude-source-end=\"3255\">\\textbf{important}\u003C\u002Fspan>\u003C\u002Fcode>\u003C\u002Ftd>\n\u003Ctd align=\"center\">\u003Cspan data-fude-source-start=\"3258\" data-fude-source-end=\"3259\">8\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\u003Cp>\u003Cspan data-fude-source-start=\"3263\" data-fude-source-end=\"3396\">This is not insignificant on a long document. Cloudflare explained that switching to Markdown in their LLM pipelines allowed them to \u003C\u002Fspan>\u003Cstrong>\u003Cspan data-fude-source-start=\"3398\" data-fude-source-end=\"3423\">reduce token usage by 80%\u003C\u002Fspan>\u003C\u002Fstrong>\u003Cspan data-fude-source-start=\"3425\" data-fude-source-end=\"3553\">. 
This figure is often cited, and it illustrates something real: the verbosity of a format has a direct cost in the LLM economy.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"3555\" data-fude-source-end=\"3773\">Markdown is designed to be readable by humans in its raw form. It doesn't have the verbosity of HTML or the complexity of LaTeX. For an LLM that pays for every character in performance and cost, it is the ideal format.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"3778\" data-fude-source-end=\"3828\">Markdown vs other formats: the complete comparison\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"3830\" data-fude-source-end=\"3931\">Here is what it looks like when we compare the main formats on the criteria that matter for AI usage:\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ctable>\n\u003Cthead>\n\u003Ctr>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3935\" data-fude-source-end=\"3944\">Criterion\u003C\u002Fspan>\u003C\u002Fth>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3947\" data-fude-source-end=\"3955\">Markdown\u003C\u002Fspan>\u003C\u002Fth>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3958\" data-fude-source-end=\"3962\">HTML\u003C\u002Fspan>\u003C\u002Fth>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3965\" data-fude-source-end=\"3969\">JSON\u003C\u002Fspan>\u003C\u002Fth>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3972\" data-fude-source-end=\"3982\">Plain text\u003C\u002Fspan>\u003C\u002Fth>\n\u003Cth align=\"left\">\u003Cspan data-fude-source-start=\"3985\" data-fude-source-end=\"3990\">LaTeX\u003C\u002Fspan>\u003C\u002Fth>\n\u003C\u002Ftr>\n\u003C\u002Fthead>\n\u003Ctbody>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4039\" data-fude-source-end=\"4055\">Token efficiency\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan 
data-fude-source-start=\"4058\" data-fude-source-end=\"4064\">✅ High\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4067\" data-fude-source-end=\"4072\">❌ Low\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4075\" data-fude-source-end=\"4084\">⚠️ Medium\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4087\" data-fude-source-end=\"4093\">✅ High\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4096\" data-fude-source-end=\"4101\">❌ Low\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4106\" data-fude-source-end=\"4129\">Raw readability (human)\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4132\" data-fude-source-end=\"4143\">✅ Excellent\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4146\" data-fude-source-end=\"4157\">❌ Difficult\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4160\" data-fude-source-end=\"4170\">⚠️ Partial\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4173\" data-fude-source-end=\"4184\">✅ Excellent\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4187\" data-fude-source-end=\"4198\">❌ Technical\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4203\" data-fude-source-end=\"4221\">Semantic structure\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4224\" data-fude-source-end=\"4229\">✅ Yes\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4232\" data-fude-source-end=\"4237\">✅ Yes\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd 
align=\"left\">\u003Cspan data-fude-source-start=\"4240\" data-fude-source-end=\"4245\">✅ Yes\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4248\" data-fude-source-end=\"4252\">❌ No\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4255\" data-fude-source-end=\"4260\">✅ Yes\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4265\" data-fude-source-end=\"4288\">Simple visual rendering\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4291\" data-fude-source-end=\"4297\">✅ Easy\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4300\" data-fude-source-end=\"4308\">✅ Native\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4311\" data-fude-source-end=\"4315\">❌ No\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4318\" data-fude-source-end=\"4322\">❌ No\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4325\" data-fude-source-end=\"4335\">⚠️ Complex\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4340\" data-fude-source-end=\"4366\">Present in training corpus\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4369\" data-fude-source-end=\"4378\">✅ Massive\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4381\" data-fude-source-end=\"4390\">✅ Massive\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4393\" data-fude-source-end=\"4403\">⚠️ Partial\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4406\" data-fude-source-end=\"4415\">✅ 
Massive\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4418\" data-fude-source-end=\"4432\">⚠️ Specialized\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003Ctr>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4437\" data-fude-source-end=\"4456\">Ideal for AI agents\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4459\" data-fude-source-end=\"4464\">✅ Yes\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4467\" data-fude-source-end=\"4476\">❌ Verbose\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4479\" data-fude-source-end=\"4491\">⚠️ Data only\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4494\" data-fude-source-end=\"4508\">❌ Unstructured\u003C\u002Fspan>\u003C\u002Ftd>\n\u003Ctd align=\"left\">\u003Cspan data-fude-source-start=\"4511\" data-fude-source-end=\"4526\">❌ Too technical\u003C\u002Fspan>\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftbody>\n\u003C\u002Ftable>\n\u003Cp>\u003Cspan data-fude-source-start=\"4530\" data-fude-source-end=\"4739\">Plain text is compact, but it loses all semantic structure. JSON is structured but hard to read without rendering. HTML is rich but too verbose. 
Markdown occupies the sweet spot: compact, readable, structured.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"4741\" data-fude-source-end=\"4878\">It is no coincidence that all major models, like ChatGPT, Claude, Gemini, Mistral, use the Markdown format by default in their responses.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"4883\" data-fude-source-end=\"4918\">How Markdown helps AI reason better\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"4920\" data-fude-source-end=\"5078\">There is a lesser-known aspect of the relationship between LLMs and Markdown: the structure is not only useful for output. It influences the reasoning itself.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"5080\" data-fude-source-end=\"5297\">Several studies show that asking an LLM to structure its response, with headers, numbered steps, lists, improves the quality of reasoning. The model, by structuring, breaks down the problem. It is an implicit form of \u003C\u002Fspan>\u003Cem>\u003Cspan data-fude-source-start=\"5298\" data-fude-source-end=\"5314\">chain-of-thought\u003C\u002Fspan>\u003C\u002Fem>\u003Cspan data-fude-source-start=\"5315\" data-fude-source-end=\"5316\">.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"5318\" data-fude-source-end=\"5488\">A well-structured response in Markdown is not only easier to read. It is often more correct, because the model was forced to organize its thoughts into distinct sections.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"5490\" data-fude-source-end=\"5637\">This is also why effective system prompts often use Markdown. 
We don't just tell the AI to \"be precise\": we give it a skeleton structure to follow.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cpre style=\"background-color:var(--fude-code-background, var(--color-code-bg));color:var(--fude-code-foreground, var(--color-code-text))\" tabindex=\"0\" class=\"shiki fude-code-theme fude-code-block\" data-language=\"markdown\" data-fude-code-block-start=\"5651\" data-fude-code-block-end=\"5845\">\u003Ccode class=\"language-markdown\" data-language=\"markdown\" data-fude-code-block-start=\"5651\" data-fude-code-block-end=\"5845\">\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">## Context\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">You are a software architecture expert.\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">## Expected response format\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">- Start with a 2-3 sentence summary.\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">- Detail the steps in a numbered list.\u003C\u002Fspan>\u003C\u002Fspan>\n\u003Cspan class=\"line\">\u003Cspan style=\"color:var(--fude-code-foreground, var(--color-code-text))\">- End with a ## Key takeaways section.\u003C\u002Fspan>\u003C\u002Fspan>\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>\u003Cspan data-fude-source-start=\"5851\" data-fude-source-end=\"5974\">This kind of prompt works because the model recognizes Markdown patterns and naturally integrates them into its generation.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"5979\" data-fude-source-end=\"6002\">The 
weak point: reading\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"6004\" data-fude-source-end=\"6047\">Everyone talks about Markdown as a tool to \u003C\u002Fspan>\u003Cem>\u003Cspan data-fude-source-start=\"6048\" data-fude-source-end=\"6053\">write\u003C\u002Fspan>\u003C\u002Fem>\u003Cspan data-fude-source-start=\"6054\" data-fude-source-end=\"6121\"> prompts. To structure context notes. To organize specs sent to AI.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"6123\" data-fude-source-end=\"6158\">This is true, but there is a flaw: \u003C\u002Fspan>\u003Cstrong>\u003Cspan data-fude-source-start=\"6160\" data-fude-source-end=\"6190\">you receive Markdown in return\u003C\u002Fspan>\u003C\u002Fstrong>\u003Cspan data-fude-source-start=\"6192\" data-fude-source-end=\"6304\">, and most AI applications or coding tools don't provide a pleasant and efficient interface to read it properly.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"6306\" data-fude-source-end=\"6517\">You ask Claude to write a well-structured 3,000-word product spec. It delivers it in Markdown. You copy-paste it into a Word document, the formatting goes up in smoke. You open it in a text editor, you read raw \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"6517\" data-fude-source-end=\"6527\">## Title\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"6527\" data-fude-source-end=\"6532\"> and \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"6532\" data-fude-source-end=\"6542\">**bold**\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"6542\" data-fude-source-end=\"6609\">. 
You email it to a colleague, they don't know what to do with the \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"6609\" data-fude-source-end=\"6614\">.md\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"6614\" data-fude-source-end=\"6620\"> file.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"6622\" data-fude-source-end=\"6690\">This is not a Markdown problem. It's a Markdown file reader problem.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"6692\" data-fude-source-end=\"6881\">Markdown was never designed to be read in its raw form. It was designed to be rendered. And yet, in AI workflows, outputs are copied into tools that lack even very basic Markdown rendering.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"6883\" data-fude-source-end=\"7088\">This is what our Fude app solves: a pleasant and efficient interface to read Markdown. The ability to customize the display, links between files, a table of contents, synchronization across your devices...\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"7093\" data-fude-source-end=\"7129\">Markdown and AI agents: the MCP case\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"7131\" data-fude-source-end=\"7262\">With the rise of AI agents and the MCP (Model Context Protocol), the relationship between Markdown and AI takes on a new dimension.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"7264\" data-fude-source-end=\"7406\">AI agents read your notes to help you work. They analyze your specs to extract tasks. 
They scan your documents to answer contextual questions.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"7408\" data-fude-source-end=\"7426\">A well-structured \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"7426\" data-fude-source-end=\"7431\">.md\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"7431\" data-fude-source-end=\"7481\"> file is much easier for an agent to parse than a \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"7481\" data-fude-source-end=\"7488\">.docx\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"7488\" data-fude-source-end=\"7494\"> or a \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"7494\" data-fude-source-end=\"7500\">.pdf\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"7500\" data-fude-source-end=\"7528\">. The hierarchy of headers (\u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"7528\" data-fude-source-end=\"7531\">#\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"7531\" data-fude-source-end=\"7533\">, \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"7533\" data-fude-source-end=\"7537\">##\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"7537\" data-fude-source-end=\"7539\">, \u003C\u002Fspan>\u003Ccode>\u003Cspan data-fude-source-start=\"7539\" data-fude-source-end=\"7544\">###\u003C\u002Fspan>\u003C\u002Fcode>\u003Cspan data-fude-source-start=\"7544\" data-fude-source-end=\"7656\">) provides an implicit table of contents. Code blocks are clearly delineated. Lists can be parsed unambiguously.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"7658\" data-fude-source-end=\"7864\">In practice: if you store your meeting notes, your specs, your brainstorms in Markdown files, you give them the ideal structure to be exploited by your AI agents. 
No need for an additional processing layer.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"7866\" data-fude-source-end=\"8067\">This is the virtuous cycle of Markdown in the AI era: you write in a format the AI understands natively, the AI responds in that same format, and you re-read and reuse this content with the same tools.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"8069\" data-fude-source-end=\"8369\">Fude offers a local MCP server that allows you to connect your notes saved in Fude to any compatible AI: Claude, ChatGPT, Cursor, and many others. The AI agent can read your documents, analyze your writing style, or help you draft new content inspired by your existing notes thanks to our MCP server.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Ch2>\u003Cspan data-fude-source-start=\"8374\" data-fude-source-end=\"8406\">Reading AI properly: Fude's goal\u003C\u002Fspan>\u003C\u002Fh2>\n\u003Cp>\u003Cspan data-fude-source-start=\"8408\" data-fude-source-end=\"8535\">Existing Markdown tools are almost all designed for writing. Obsidian, Notion, Typora, their core value proposition is editing.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"8537\" data-fude-source-end=\"8734\">But when you receive an answer from Claude or a spec generated by Codex, you don't need an editor. You need a reader. Something that renders Markdown cleanly, on all your devices, without friction.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"8736\" data-fude-source-end=\"8920\">That is exactly Fude's goal. Not an editor. 
A reader, designed so that the rendered Markdown is pleasant to consult, whether it's an AI-generated spec, a doc article, or meeting notes.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"8922\" data-fude-source-end=\"9237\">You connect your existing sources (local files, GitHub, Google Drive), you organize your content into projects, and you read. Cleanly. With Markdown rendering that respects hierarchy, tables, code blocks, and Mermaid diagrams. And with a built-in MCP server, your notes become directly accessible to your AI agents.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"9239\" data-fude-source-end=\"9302\">To understand the vision behind Fude, you can read our article \u003C\u002Fspan>\u003Ca href=\"\u002Fen\u002Fblog\u002Ffude-markdown-reader-ai-era\" data-fude-link-kind=\"unsupported\">\u003Cspan data-fude-source-start=\"9303\" data-fude-source-end=\"9340\">Fude a Markdown reader for the AI era\u003C\u002Fspan>\u003C\u002Fa>\u003Cspan data-fude-source-start=\"9379\" data-fude-source-end=\"9380\">.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Chr>\n\u003Cp>\u003Cspan data-fude-source-start=\"9389\" data-fude-source-end=\"9630\">Markdown is not just another format. It's the language in which AI thinks and expresses itself. If you use LLMs seriously, to write, analyze, specify, document, you are producing dozens of Markdown files every week, often without knowing it.\u003C\u002Fspan>\u003C\u002Fp>\n\u003Cp>\u003Cspan data-fude-source-start=\"9632\" data-fude-source-end=\"9634\">📌 \u003C\u002Fspan>\u003Ca href=\"\u002Fen#download\" data-fude-link-kind=\"unsupported\">\u003Cspan data-fude-source-start=\"9635\" data-fude-source-end=\"9679\">Try Fude, the Markdown reader for the AI era\u003C\u002Fspan>\u003C\u002Fa>\u003C\u002Fp>","\nAsk Claude, ChatGPT, or Gemini to explain a technical concept. Look closely at the answer. 
There are headers with `##`, text surrounded by `**`, bulleted lists, and other formatting symbols. You are reading Markdown.\n\nMost people don't realize it. They see well-formatted, clear, spaced-out text. But behind this rendering, there is a syntax that the model produced and that the interface transformed on the fly.\n\nThis is the result of a series of technical decisions, which explain why Markdown became the default output format for all major LLMs, and why a tool designed to read it properly is essential.\n\n## What AI actually produces\n\nWhen you ask an LLM a question, it doesn't generate HTML. It doesn't produce a Word document or a PDF. It generates raw text with a lightweight syntax.\n\nThis syntax is Markdown.\n\nHere is what a typical response from Claude looks like in raw text, before rendering:\n\n```markdown\n## Steps to configure your environment\n\n1. Install **Node.js** (version 18 or higher)\n2. Clone the repository with `git clone https:\u002F\u002Fgithub.com\u002F...`\n3. Run `npm install` in the project folder\n\n> **Note:** check your version with `node --version` before continuing.\n```\n\nWhat the model wrote: a handful of lines of Markdown syntax. AI **writes** Markdown, and it's also a format it reads very well.\n\n## A brief history of Markdown\n\nMarkdown was created in 2004 by John Gruber and Aaron Swartz with a specific goal: a format that humans can read in its raw form and that machines can convert to HTML. The idea was simple: use existing typographic conventions (asterisks for bold, hashes for headers) rather than inventing a new syntax.\n\nIn its early days, Markdown was mostly a tool for bloggers. Then GitHub adopted it for README files in 2009, and it exploded. 
In just a few years, the format became the standard for technical documentation, wikis, product specs, and developer notes.\n\nIt is this omnipresence in the development ecosystem (billions of documents, comments, issues, and pull requests) that laid the foundation for its predominance with AI.\n\n## Why LLMs naturally adopted Markdown\n\nLarge language models were trained on billions of tokens from the web, GitHub repositories, technical documentation, and forums like Stack Overflow and Reddit. A massive part of this corpus is written in Markdown: READMEs, wikis, doc articles, product specifications, blog posts.\n\nThe model didn't \"learn\" to write Markdown via an explicit rule. It internalized it because it's the dominant language in the data it was trained on. When asked to structure an answer, it reproduces the patterns it has seen millions of times.\n\nThis adoption isn't just a matter of style. It is also deeply linked to efficiency.\n\n## Token efficiency: the metric that explains everything\n\nA token is not a word. It is a unit of text, roughly 4 characters in English. Each request to an LLM costs tokens for input and output. And every token counts, in terms of cost, latency, and context limit.\n\nLet's compare the same formatting in different formats:\n\n| Format | Syntax for \"important\" in bold | Approximate tokens |\n| :--- | :--- | :---: |\n| Markdown | `**important**` | 4 |\n| HTML | `\u003Cstrong>important\u003C\u002Fstrong>` | 10 |\n| RTF | `{\\b important}` | 7 |\n| LaTeX | `\\textbf{important}` | 8 |\n\nThis is not insignificant over a long document. Cloudflare explained that switching to Markdown in their LLM pipelines allowed them to **reduce token usage by 80%**. This figure is often cited, and it illustrates something real: the verbosity of a format has a direct cost in the LLM economy.\n\nMarkdown is designed to be readable by humans in its raw form. It doesn't have the verbosity of HTML or the complexity of LaTeX. 
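The gap shows up even in a plain character count. Here is a minimal sketch; characters are only a rough proxy for tokens, since every tokenizer splits text differently, but the relative verbosity is what drives the cost:

```python
# Character counts for the same bold markup in each format.
# Characters are only a rough proxy for tokens (roughly 4 characters
# per token in English), but the relative verbosity is what matters.
samples = {
    "Markdown": "**important**",
    "HTML": "<strong>important</strong>",
    "RTF": r"{\b important}",
    "LaTeX": r"\textbf{important}",
}

for fmt, text in sorted(samples.items(), key=lambda kv: len(kv[1])):
    print(f"{fmt:>8}: {len(text)} characters")
```

Markdown comes out shortest (13 characters against 26 for HTML), which matches the token ordering in the table above.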
For an LLM that pays for every character in performance and cost, it is the ideal format.\n\n## Markdown vs other formats: the complete comparison\n\nHere is what it looks like when we compare the main formats on the criteria that matter for AI usage:\n\n| Criterion | Markdown | HTML | JSON | Plain text | LaTeX |\n| :--- | :--- | :--- | :--- | :--- | :--- |\n| Token efficiency | ✅ High | ❌ Low | ⚠️ Medium | ✅ High | ❌ Low |\n| Raw readability (human) | ✅ Excellent | ❌ Difficult | ⚠️ Partial | ✅ Excellent | ❌ Technical |\n| Semantic structure | ✅ Yes | ✅ Yes | ✅ Yes | ❌ No | ✅ Yes |\n| Simple visual rendering | ✅ Easy | ✅ Native | ❌ No | ❌ No | ⚠️ Complex |\n| Present in training corpus | ✅ Massive | ✅ Massive | ⚠️ Partial | ✅ Massive | ⚠️ Specialized |\n| Ideal for AI agents | ✅ Yes | ❌ Verbose | ⚠️ Data only | ❌ Unstructured | ❌ Too technical |\n\nPlain text is compact, but it loses all semantic structure. JSON is structured but hard to read without rendering. HTML is rich but too verbose. Markdown occupies the sweet spot: compact, readable, structured.\n\nIt is no coincidence that all the major models (ChatGPT, Claude, Gemini, Mistral) use Markdown by default in their responses.\n\n## How Markdown helps AI reason better\n\nThere is a lesser-known aspect of the relationship between LLMs and Markdown: the structure is not only useful for output. It influences the reasoning itself.\n\nSeveral studies show that asking an LLM to structure its response (headers, numbered steps, lists) improves the quality of its reasoning. By structuring its answer, the model breaks the problem down. It is an implicit form of *chain-of-thought*.\n\nA well-structured response in Markdown is not only easier to read. It is often more correct, because the model was forced to organize its thoughts into distinct sections.\n\nThis is also why effective system prompts often use Markdown. 
We don't just tell the AI to \"be precise\": we give it a skeleton structure to follow.\n\n```markdown\n## Context\nYou are a software architecture expert.\n\n## Expected response format\n- Start with a 2-3 sentence summary.\n- Detail the steps in a numbered list.\n- End with a ## Key takeaways section.\n```\n\nThis kind of prompt works because the model recognizes Markdown patterns and naturally integrates them into its generation.\n\n## The weak point: reading\n\nEveryone talks about Markdown as a tool to *write* prompts. To structure context notes. To organize specs sent to AI.\n\nThis is true, but there is a flaw: **you receive Markdown in return**, and most AI applications and coding tools don't provide a pleasant and efficient interface to read it properly.\n\nYou ask Claude to write a well-structured 3,000-word product spec. It delivers it in Markdown. You copy-paste it into a Word document, and the formatting goes up in smoke. You open it in a text editor, and you read raw `## Title` and `**bold**`. You email it to a colleague, and they don't know what to do with the `.md` file.\n\nThis is not a Markdown problem. It's a Markdown file reader problem.\n\nMarkdown was never designed to be read in its raw form. It was designed to be rendered. And yet, in AI workflows, outputs are copied into tools that lack even very basic Markdown rendering.\n\nThis is what our Fude app solves: a pleasant and efficient interface to read Markdown. The ability to customize the display, links between files, a table of contents, synchronization across your devices...\n\n## Markdown and AI agents: the MCP case\n\nWith the rise of AI agents and the MCP (Model Context Protocol), the relationship between Markdown and AI takes on a new dimension.\n\nAI agents read your notes to help you work. They analyze your specs to extract tasks. They scan your documents to answer contextual questions.\n\nA well-structured `.md` file is much easier for an agent to parse than a `.docx` or a `.pdf`. 
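As an illustration, a few lines of Python can recover a document outline from raw Markdown. This is a hypothetical sketch (`outline` is a name invented here, and real agents rely on full parsers):

```python
import re

FENCE = "`" * 3  # the triple-backtick fence delimiter

def outline(markdown_text: str) -> list[tuple[int, str]]:
    """Collect (level, title) pairs from ATX headers, skipping fenced code."""
    toc, in_fence = [], False
    for line in markdown_text.splitlines():
        if line.lstrip().startswith(FENCE):
            in_fence = not in_fence  # toggle on every fence delimiter
            continue
        match = re.match(r"(#{1,6})\s+(.*)", line)
        if match and not in_fence:
            toc.append((len(match.group(1)), match.group(2).strip()))
    return toc

spec = "# Spec\nIntro.\n## Goals\n- fast\n## Risks\n### Legal\n"
print(outline(spec))  # [(1, 'Spec'), (2, 'Goals'), (2, 'Risks'), (3, 'Legal')]
```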
The hierarchy of headers (`#`, `##`, `###`) provides an implicit table of contents. Code blocks are clearly delineated. Lists can be parsed unambiguously.\n\nIn practice: if you store your meeting notes, your specs, and your brainstorms in Markdown files, you give them the ideal structure for your AI agents to work with. No need for an additional processing layer.\n\nThis is the virtuous cycle of Markdown in the AI era: you write in a format the AI understands natively, the AI responds in that same format, and you re-read and reuse this content with the same tools.\n\nFude offers a local MCP server that allows you to connect your notes saved in Fude to any compatible AI: Claude, ChatGPT, Cursor, and many others. The AI agent can read your documents, analyze your writing style, or help you draft new content inspired by your existing notes.\n\n## Reading AI properly: Fude's goal\n\nExisting Markdown tools are almost all designed for writing. Obsidian, Notion, Typora: their core value proposition is editing.\n\nBut when you receive an answer from Claude or a spec generated by Codex, you don't need an editor. You need a reader. Something that renders Markdown cleanly, on all your devices, without friction.\n\nThat is exactly Fude's goal. Not an editor. A reader, designed so that the rendered Markdown is pleasant to consult, whether it's an AI-generated spec, a doc article, or meeting notes.\n\nYou connect your existing sources (local files, GitHub, Google Drive), you organize your content into projects, and you read. Cleanly. With Markdown rendering that respects hierarchy, tables, code blocks, and Mermaid diagrams. And with a built-in MCP server, your notes become directly accessible to your AI agents.\n\nTo understand the vision behind Fude, you can read our article [Fude, a Markdown reader for the AI era](\u002Fen\u002Fblog\u002Ffude-markdown-reader-ai-era).\n\n* * *\n\nMarkdown is not just another format. 
It's the language in which AI thinks and expresses itself. If you use LLMs seriously (to write, analyze, specify, document), you are producing dozens of Markdown files every week, often without knowing it.\n\n📌 [Try Fude, the Markdown reader for the AI era](\u002Fen#download)