# Blog writer
Breaking down complex tasks into smaller, discrete steps is one of the best ways to improve the quality of LLM outputs. The blog writer workflow example does this by following the same approach a human would take to write a blog post: first conducting research, then generating a first draft, and finally editing that draft.
## Workflow
The Blog Writer workflow consists of the following steps:
- Parallel research phase:
  - Brainstorm topics using an LLM (`<LLMResearchBrainstorm>`)
  - Research each topic in detail (`<LLMResearch>`)
  - Gather web research (`<WebResearcher>`)
- Write an initial draft based on the research (`<LLMWriter>`)
- Edit and polish the content (`<LLMEditor>`)
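Stripped of the GenSX specifics, the shape of this pipeline can be sketched in plain TypeScript: a parallel research phase whose results are combined, followed by sequential draft and edit steps. All function names below are hypothetical stand-ins for the components above, not GenSX APIs.

```typescript
// Illustrative sketch of the workflow shape; names are placeholders.
async function brainstormAndResearch(prompt: string): Promise<string[]> {
  // Stand-in for <LLMResearchBrainstorm> feeding <LLMResearch>
  return [`deep dive: ${prompt}`];
}

async function webResearch(prompt: string): Promise<string[]> {
  // Stand-in for <WebResearcher>
  return [`web notes: ${prompt}`];
}

async function writeBlogPost(prompt: string): Promise<string> {
  // Research phase: both branches run concurrently, results are combined.
  const [topicResearch, webNotes] = await Promise.all([
    brainstormAndResearch(prompt),
    webResearch(prompt),
  ]);
  const research = [...topicResearch, ...webNotes];

  // Sequential phase: draft (stand-in for <LLMWriter>), then edit
  // (stand-in for <LLMEditor>).
  const draft = `draft using ${research.length} research items`;
  return `edited: ${draft}`;
}
```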
## Running the example
```bash
# Install dependencies
pnpm install

# Set your OpenAI API key
export OPENAI_API_KEY=<your_api_key>

# Run the example
pnpm run start
```

## Key patterns

### Running components in parallel
The `<ParallelResearch>` component runs the `<LLMResearchBrainstorm>` and `<WebResearcher>` components in parallel. Both sub-workflows return arrays of strings, which are automatically combined into a single array.
```tsx
const ParallelResearch = gsx.Component<
  ParallelResearchComponentProps,
  ParallelResearchOutput
>("ParallelResearch", ({ prompt }) => {
  return (
    <>
      <LLMResearchBrainstorm prompt={prompt}>
        {({ topics }) => {
          return topics.map((topic) => <LLMResearch topic={topic} />);
        }}
      </LLMResearchBrainstorm>
      <WebResearcher prompt={prompt} />
    </>
  );
});
```

### Streaming Output
The workflow streams the final output back to reduce the time it takes for the user to receive the first token. The `<LLMEditor>` component is a `StreamComponent`, and the `<ChatCompletion>` component has `stream={true}`.
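The benefit of streaming is that the caller can surface tokens as they arrive instead of waiting for the whole response. Stripped of the GenSX and OpenAI specifics, the pattern can be sketched with a plain async generator (all names below are illustrative, not part of either API):

```typescript
// Illustrative sketch: streaming output as an async generator of tokens.
async function* editDraft(draft: string): AsyncGenerator<string> {
  // In the real workflow, tokens would arrive incrementally from the model.
  for (const word of draft.split(" ")) {
    yield word + " ";
  }
}

// The consumer can act on each token as soon as it is yielded,
// rather than waiting for the full edited draft.
async function consume(draft: string): Promise<string> {
  let out = "";
  for await (const token of editDraft(draft)) {
    out += token; // e.g. render the token to the user here
  }
  return out;
}
```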
```tsx
const LLMEditor = gsx.StreamComponent<LLMEditorProps>(
  "LLMEditor",
  ({ draft }) => {
    return (
      <ChatCompletion
        stream={true}
        model="gpt-4o-mini"
        temperature={0}
        messages={[
          { role: "system", content: systemPrompt },
          { role: "user", content: draft },
        ]}
      />
    );
  },
);
```

Then, when the component is invoked, `stream={true}` is passed to it so that the output is streamed back and can be surfaced to the user:
```tsx
<LLMEditor draft={draft} stream={true} />
```

## Additional resources
Check out the other examples in the GenSX GitHub repo.