The Slate Star Codex summaries are available here in EPUB format.
Blog posts, a love letter
My life has been significantly impacted by internet blog posts. Blogs delve deep into difficult subjects and present fresh perspectives, often moving beyond the clichés and biases of mainstream media.
For me, blog posts serve as a serendipity engine, exposing me to new ideas and knowledge domains I wasn't even aware of. No other medium has the same effect, except maybe tweets. Consequently, I make an effort to read as many high-quality blog posts as possible.
However, there's a catch. There are countless blogs out there, and if compiled into books, they'd span numerous volumes with thousands of pages each. While I'd love to read and remember all of it, time simply doesn't allow for that.
The next best option is to have GPT generate summaries of the best blogs and then skim through them for intriguing concepts.
In my view, summaries are not a replacement for the real thing. The more interesting a post is, the less likely its essence can be captured in a few bullet points.
That said, summaries can help build a search engine for blog ideas, enabling you to quickly review main points and determine which ones to examine further. This method can amplify the serendipity engine power of blogs as well as books, tweets, and academic journals.
Tackling a 2.5 million-word blog
Slate Star Codex is one of my all-time favorite blogs. Founded by Scott Alexander, it's a fascinating collection of thought-provoking content that tackles topics like psychology, technology, and ethics. The blog ran from 2013 to 2021, ultimately being replaced by Astral Codex Ten. Slate Star Codex encompasses roughly 2.5 million words, which translates to 3.16 million OpenAI tokens.
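As a quick sanity check on that word-to-token conversion, here is a tiny sketch; the 1.264 tokens-per-word ratio is simply inferred from the figures above (3.16M tokens / 2.5M words), and typical English prose does land around 1.2–1.3 tokens per word:

```python
# Rough word-to-token estimate. The ratio is inferred from the figures
# above, not computed by a tokenizer; a library such as tiktoken would
# give exact counts.
def estimate_tokens(word_count, tokens_per_word=1.264):
    return round(word_count * tokens_per_word)

print(estimate_tokens(2_500_000))  # 3160000, i.e. ~3.16 million tokens
```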
Given its scale and the diversity of its posts, Slate Star Codex is an ideal candidate for GPT's summarization abilities. Through summarization, the blog's content was reduced to 323k words or about 13% of the original. While the process can be applied recursively to further reduce the text, doing so wouldn't be ideal for a blog where each post covers a distinct subject.
Summarization challenges
I encountered two main challenges during the summarization task:
Handling GPT's limited context window.
Prompt engineering to make GPT do what I wanted.
Tackling the context window
Since I don't have access to the GPT-4 API with 8k and 32k context windows, I used the gpt-3.5-turbo model with a 4k context window. For longer posts, I divided them into chunks and fed each chunk to GPT for summarization. I experimented with a second pass where GPT would read partial summaries and consolidate them, but it resulted in overly concise summaries that omitted valuable information. Consequently, I decided to simply concatenate the partial summaries.
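The chunking step can be sketched roughly as below; the original code isn't shown in the post, so the token budget and the ~1.3 tokens-per-word approximation are assumptions (an exact count would use a tokenizer):

```python
# Minimal sketch of splitting a long post into chunks that fit a 4k-token
# context window, leaving headroom for the prompt and the model's reply.
# Token counts are approximated at ~1.3 tokens per English word; a real
# pipeline would use an exact tokenizer such as tiktoken.
def chunk_post(text, chunk_tokens=2500, tokens_per_word=1.3):
    max_words = int(chunk_tokens / tokens_per_word)
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]
```

Each chunk is then summarized independently, and the partial summaries concatenated as described above.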
Prompt engineering
To capture the nuances and various functions of a blog post, I chose a three-part approach:
Key ideas: The main concepts the author explores.
Key learnings: The author's insights and conclusions.
Key questions: Questions the author raises for themselves and their readers.
This was the prompt I used.
This is a portion of a blog post, not the beginning or end. Your response should look like this:
'''
Key ideas
[Provide a summary in bullet points of the key ideas that are proposed in the post.]
Key learnings
[Provide a summary in bullet points of the key learnings that are proposed in the post.]
Key questions
[Provide a summary in bullet points of the key questions the author (Scott) asks himself in the post.]
'''
Never step out of this structure! Another GPT instance will use these bullet points to create a more concise summary later. Always refer to the author as Scott! Never refer to him as "the author"! Remember: write only in bullet points! Don't forget!
Post:
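The post doesn't show the calling code, but a single summarization request might look like the sketch below, using only the standard library against the Chat Completions endpoint (PROMPT is abbreviated here, and the helper names are my own):

```python
import json
import urllib.request

# Abbreviated stand-in for the full prompt shown above.
PROMPT = "This is a portion of a blog post, not the beginning or end. ... Post:"

def build_payload(chunk, model="gpt-3.5-turbo"):
    # The prompt ends with "Post:", so the chunk text is appended directly.
    return {
        "model": model,
        "messages": [{"role": "user", "content": PROMPT + "\n" + chunk}],
    }

def summarize_chunk(chunk, api_key):
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_payload(chunk)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```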
Although GPT did well with the output structure, it had trouble avoiding phrases like "the author thinks" despite my instructions. Additionally, the "Key questions" section often veered off course, with GPT going into “school quiz mode” and posing questions as if testing the reader's comprehension.
Despite these issues, the final result was quite interesting, and I found it useful for quickly skimming the ideas, learnings, and questions that emerged in the blog.
Comparing GPT-3.5 and GPT-4
An interesting question is how GPT-3.5 and GPT-4 compare on the same summarization task. Will GPT-4's enhanced intelligence make a significant difference, or can GPT-3.5 handle the task just as well?
The main difference I noticed is that GPT-4 does a much better job of identifying and expanding on the questions raised in the post. For instance, consider this post.
Here are the questions identified by GPT-3.5:
Why do some groups see significant changes in happiness while others do not?
What role do intangible factors, such as freedom and stability, play in shaping happiness?
How can societies improve happiness levels for their citizens?
And here’s GPT-4:
Why didn't modern Chinese calibrate themselves to poverty, making sudden wealth seem good?
What's the difference between a Chinese person going from poverty to wealth versus a Syrian going from stability to chaos?
Why did happiness change for African-Americans and women over multi-decade periods, but not for the Chinese?
What are the country-specific factors that might have hindered happiness growth in China despite economic development?
This suggests that asking questions might be a more challenging task than summarizing points made, potentially requiring more context, intelligence, or other factors.
Building the pipeline
The process involved:
Extracting blog posts
Chunking the larger posts
Summarizing the posts with the GPT-3.5 API
Assembling the final result in Markdown and EPUB
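The assembly step might look like the following sketch; the document title and helper name are hypothetical, and the final Markdown-to-EPUB conversion is assumed to be handled by a tool such as pandoc:

```python
# Gather per-post summaries (title plus bullet-point sections) into one
# Markdown document. The default title below is a placeholder, not
# necessarily the one actually used.
def assemble_markdown(summaries, title="Slate Star Codex: Summaries"):
    parts = [f"# {title}\n"]
    for post_title, body in summaries:
        parts.append(f"## {post_title}\n\n{body}\n")
    return "\n".join(parts)

doc = assemble_markdown([("Example Post", "Key ideas\n- First idea")])
```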
The main takeaway is that GitHub Copilot and ChatGPT have significantly increased my coding speed, particularly for quick projects or proofs of concept. Most of the work now involves asking ChatGPT for code, pasting it into my IDE, and using Copilot to refine or expand it. I estimate this has multiplied my productivity by a factor of 2x to 5x.
Future prospects
Summarizing more blogs.
Identifying patterns, trends, and insights from analyzed content to develop new ideas, connections, and perspectives.
Leveraging GPT-4's ability to identify relevant questions and research agendas.
Experimenting with larger context windows.
Creating a database of key insights and questions for easier searching and exploring of specific topics.
Developing blog recommendation engines based on interests and reading history, connecting readers to new ideas and knowledge.