Blurbs, Summaries, and the Slow Disappearance of the Web

Ryan Bednar · 7 min read

For about twenty-five years the internet had a simple structure. Someone wrote something, and if you wanted to read it, you clicked a link. Search engines helped you find things, but they didn't replace them. Google showed you ten blue links. Twitter showed you a link preview. Hacker News showed a headline and a sentence explaining why the link was interesting. The real value lived somewhere else, on the page you clicked.

This system worked because discovery platforms used blurbs rather than answers. A blurb was just enough information to make you curious. A search snippet might show two sentences from an article. A tweet might summarize a post in one line. A Hacker News submission might contain a single sentence explaining the idea. But the blurb was never the thing itself. It was an invitation.

Blurbs were intentionally incomplete. They nudged you toward the source rather than replacing it. If you wanted the tutorial, you clicked the tutorial. If you wanted the argument, you read the essay. If you wanted the discussion, you joined the thread.

That constraint shaped the economics of the web. Discovery systems summarized content, but they couldn't consume it. Traffic still flowed outward. Bloggers got readers. Media companies got page views. Forums grew communities. Even platforms that publishers worried about, Google especially, ultimately drove enormous amounts of traffic to the open web.

Large language models change this structure in a quiet but important way. They replace blurbs with summaries.

A summary is different from a blurb because it can actually substitute for the original. When you ask an LLM a question, it often gives you the essence of the answer directly. Ask how venture capital funds work and it explains management fees, carried interest, and fund cycles. Ask how to fix a Python error and it frequently shows the exact code. Ask for a summary of a research paper and you get a coherent explanation within seconds.

In many cases this answer captures most of the value of the underlying sources. Once you have that, the need to click disappears.

The important shift isn't just that the answer is good. It's that the click is gone. When people interact with a chat interface, they often stay inside it. The model reads many pages, synthesizes the information, and presents the result directly. The user receives the benefit without ever visiting the source.

For most of the web's history the flow looked like this: the user asked a question, the search engine returned links, and the user visited the page. The destination still mattered.

With LLMs the flow increasingly looks different. The user asks a question and the model produces an answer synthesized from many pages. The destination disappears.

This is why publishers are uneasy about the rise of AI summaries. Their entire model depends on traffic. If people stop clicking, the economic foundation of the open web becomes fragile.

Another way to understand this shift is that the web is slowly turning into infrastructure. In the early internet websites were destinations. You visited a blog because the blog was where the knowledge lived. You visited TechCrunch or Slashdot because that was the place the conversation happened.

Now many pages function more like inputs. They are scraped, indexed, embedded, and summarized. Their information is extracted and recombined elsewhere. In effect the web is becoming a massive dataset that other systems read.

You can already see this happening in everyday behavior. Developers who once searched Stack Overflow threads now ask AI coding tools directly. Students summarize textbooks using chat interfaces. Even news consumption is starting to change. Instead of reading five different articles about a story, you can ask a model what happened and receive a synthesized explanation.

This transformation may sound unprecedented, but the web has gone through similar transitions before. Social networks already weakened the open browsing model. When Facebook and Twitter became dominant, discovery moved from search to feeds. Content still existed on external sites, but traffic increasingly depended on algorithms.

Publishers adapted to that shift. They optimized headlines for sharing. They wrote stories designed to travel through feeds. But the links themselves remained central. Social platforms distributed links.

Large language models go one step further. Instead of distributing links, they compress them into answers.

That compression changes the incentives that created the web in the first place. Many of the most useful pages on the internet were written by individuals who enjoyed the steady trickle of readers arriving through search. A developer might write a blog post explaining a tricky bug. A hobbyist might publish a guide to fixing a bicycle derailleur. A researcher might post a detailed explanation of a new idea.

These pages formed the "long tail" of the internet. They weren't famous, but they were incredibly useful. Search engines were good at surfacing them because they pointed directly to the page.

If AI systems answer the question without sending visitors to the source, that long tail may weaken. The information still exists, but fewer people experience the reward of readers discovering their work. Over time fewer people may bother writing those explanations.

The internet becomes quieter, even if the knowledge remains.

Not every part of the web shrinks in this scenario. Some things actually become more valuable. Content that contains original experience or firsthand observation is harder to compress. A model can summarize an essay, but it can't replicate the feeling of participating in a community. It can summarize a conversation, but it can't replace the conversation itself.

This suggests that the web may shift toward things that cannot easily be distilled. Communities, discussions, primary reporting, and unique datasets become more important. If you want to understand something new, you still need people close to the source of it.

Another interesting side effect is that writers may start writing with AI readers in mind. For the last fifteen years, writers optimized their content for search engines. Articles were structured to satisfy Google's ranking algorithms. Keywords mattered more than clarity.

LLMs change this dynamic because they care more about meaning than keywords. Content that is clear and logically structured is easier for models to interpret and summarize. Ironically this could reward good writing again. Clear explanations become easier for models to incorporate into answers.

Of course, whenever someone says the web is disappearing, they are usually overstating the case. The internet rarely dies. Instead the interface shifts. Blogs didn't disappear when social media grew. Forums didn't disappear when Reddit grew. They simply became less central.

The same thing may happen here. The open web will still exist, but fewer people will navigate it directly. Instead they will interact through AI systems that read the web on their behalf.

In this sense the web becomes the substrate rather than the interface. The raw material of knowledge remains online, but the layer that humans interact with moves upward.

There is a strange irony in all of this. The open web made large language models possible. These models were trained on enormous amounts of publicly available writing: blogs, forums, documentation pages, research papers, and articles. Without the openness of the web, the training data wouldn't exist.

Now those models sit between users and the sources that created them.

The web taught the models how to answer questions. And now the models answer those questions without necessarily pointing back to the teachers.

What happens next is still unclear. New incentives will emerge. Some publishers may restrict access to their content. Others may focus on experiences and communities that AI cannot easily replicate. Entirely new forms of publishing may appear that assume AI intermediaries from the start.

But one shift already seems clear. The internet we grew up with was built on blurbs that invited curiosity. The internet emerging now runs on summaries that satisfy curiosity immediately.

Blurbs pushed you toward the source.

Summaries make the source optional.
