Atomic
Along with Scroll Lo-Fi, we also explored multiple formats in which the same article can be presented to end users, compiled them into the Atomic repo, and hosted the demos at scroll.in/ai.
It’s a fast-paced, fully experimental, disorganised repo containing both live and abandoned experiments.
Tech
- Scroll Vector Database for semantic search on latest articles.
- BAML and PydanticAI for agentic framework.
- FastAPI for backend API.
- Vue.js for frontend.
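At its core, the semantic search over the latest articles boils down to ranking article embeddings by similarity to a query embedding. A minimal sketch of that idea (the actual Scroll Vector Database API is not shown here; `cosine` and `search` are illustrative names, and real embeddings would come from an embeddings model rather than hand-written vectors):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec: list[float], articles: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the ids of the k articles closest to the query vector."""
    ranked = sorted(articles, key=lambda aid: cosine(query_vec, articles[aid]), reverse=True)
    return ranked[:k]
```

A real vector database does the same ranking with an approximate nearest-neighbour index instead of a linear scan, which is what makes it fast at scale.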
Experiments
Representing a single article in multiple formats
- Detail Slider - Reveal original paragraphs by importance: TL;DR -> Tell me more -> Tell me everything.
- Complexity Slider - 6 levels of complexity: Original -> Beginner -> Semi-familiar -> Aware of topic -> Domain-aware -> Expert.
- Facts - Need to Know, Good to Know.
- Calculator - e.g. calculate personal tax with full breakdown based on the article about new tax rules.
- Mindmap - Knowledge Graph mapping entities and their relationships mentioned in the article.
- Expander - Expand highlighted phrases to reveal more details about them.
- Impact - Decision tree based UI to explore how the news directly or indirectly impacts you.
- Number - Story in numbers - i.e. just the numbers extracted from the article in tabular format.
- FAQs - Nested, expandable, frequently asked questions about the article, with answers.
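Most of these formats reduce to asking an agent for a typed structure that the frontend can render. As one example, the Detail Slider could be backed by a Pydantic model like the sketch below (the model and field names are hypothetical, not the repo's actual schema):

```python
from enum import IntEnum
from pydantic import BaseModel

class Importance(IntEnum):
    TLDR = 0             # shown at "TL;DR"
    TELL_ME_MORE = 1     # revealed at "Tell me more"
    EVERYTHING = 2       # revealed at "Tell me everything"

class RankedParagraph(BaseModel):
    text: str
    importance: Importance

class DetailSliderResponse(BaseModel):
    title: str
    paragraphs: list[RankedParagraph]

    def visible_at(self, level: Importance) -> list[str]:
        # Original paragraphs whose importance rank is at or below the slider level.
        return [p.text for p in self.paragraphs if p.importance <= level]
```

The agent only has to rank the original paragraphs once; moving the slider is then a pure frontend filter with no further model calls.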
Formats for cross-article storylines
Learnings
BAML as agentic framework
See Scroll Lo-Fi for learnings on BAML as an agentic framework.
PydanticAI as agentic framework
PydanticAI reduces a lot of boilerplate when the input type (deps_type) and output type (output_type) are used along with dynamic instructions (@agent.instructions).
Example:
from typing import TypedDict

from pydantic import BaseModel
from pydantic_ai import Agent, RunContext

PROMPT = """\
{input[foo]}
"""

class Input(TypedDict):  # TypedDict is convenient for direct input.
    foo: str

class Output(BaseModel):  # BaseModel is convenient for validation and parsing.
    result: str

agent = Agent(
    name="Temporal Events Extractor",
    model=provider.gpt_5_1,  # provider is defined elsewhere in the repo.
    output_type=Output,
    deps_type=Input,
)

@agent.instructions
def prompt(ctx: RunContext[Input]) -> str:
    # ctx.deps is a plain dict (TypedDict), so the template uses item access.
    return PROMPT.format(input=ctx.deps)

result = await agent.run(deps=Input(foo="bar"))
output = result.output  # a validated Output instance
Dig here for a complete example.
OpenAPI spec driven frontend generation
Being primarily a backend developer, I found it really convenient to generate the frontend directly from the OpenAPI spec of the backend API: I can focus on the backend logic and let the frontend code be generated automatically.
For that to work well, I had to name the API endpoints and their input/output models in a way that makes sense for the frontend.
Example input schema:
class EventsByArticleIDsRequest(BaseModel):
    article_ids: list[int]
Example output schema:
class ComplexitySliderResponse(BaseModel):
    title: str
    heading: str
    complexity_0_original_html: str
    complexity_1_beginner_html: str
    complexity_2_semi_familiar_html: str
    complexity_3_aware_of_topic_html: str
    complexity_4_domain_aware_html: str
    complexity_5_expert_html: str