BrainExpanded – The Timeline

See “BrainExpanded – Introduction” for context on this post.

Notes and links

Over the years, I have used various ways to capture personal notes, TODOs, and articles/videos I encountered. I thought that I would go back to them, recall the information I captured, and retrieve knowledge from those articles or videos. I used note-capturing apps, bookmarking apps, “save for later” apps, TODO apps, reminder apps. You name it, I probably tried it. After a while, they all ended up being a black hole… information went in but I never actually went back to discover or use it. Perhaps I am not methodical enough, I can’t follow a routine, or their user interfaces just don’t suit my information management style… whatever the reason, such tools never became a habit for me. They work for others but not for me.

One of the approaches I used was to send notes to myself via email. For example, I made it very easy to capture articles I encountered via my RSS feed reader. I created an iOS shortcut called “Note to self” and added it to iOS’s share sheet. When I encountered an article that looked interesting (most likely because of its title), I shared it via “Note to self”, which emailed the link to me. I ended up with about 1,000 emails in my mailbox that I never revisited.

The timeline

So… I decided that my BrainExpanded project would tackle my issue with articles first. But first, a trip down memory lane:

Back in the Cortana and Alexa days, I talked about an append-only log, in public presentations, as an evolution of “memory”. I had modeled everything as streams of events: the user’s requests to the assistant and its responses; the user’s location; the inferred activities and moods over the day; the notifications they received from apps. We even built a large distributed system, called Reactor, to handle the proliferation of these streams. The idea was to have indexes over the append-only log to support retrieval via Cortana. Continuous processing over the append-only log would generate new entries such as daily summaries of activities, monthly summaries, and concept-based groupings of memories (e.g., “dining experiences”).

Back to BrainExpanded.

The questions I asked myself were:

How can an assistant, built with today’s technologies, help me keep up with articles I encounter, extract their summaries, organize them into topics/concepts, and then support a language-based interaction model for information recall and exploration? On any day, can I ask for a summary of that day’s articles and have a conversation with my assistant about them?

The first thing I designed and built for BrainExpanded is the “memory” or “timeline”, an evolution of the “append-only log” from my Cortana days. The timeline stores everything that I send to the assistant and all the inferences over that content. For now, it’s just links and notes. Over time, it can be photographs, videos, or even live streaming from my phone’s camera. Ultimately, the entries in the timeline can be from any source. Even further down the line, the timeline can support the continuous capturing of egocentric context, as I discussed in the “BrainExpanded – Introduction” post. But that’s probably a challenge for the big companies that are building the necessary hardware (e.g. Meta, Apple, Google/Samsung) and the required multi-modal LLMs.

Effectively, my timeline becomes my assistant’s memory.

No more misunderstandings about who said what in an argument… “No, I didn’t say that”. The memory of a past conversation is always just an assistant request away 🙂

High-level design

So I started building. I currently have two timeline implementations, one file-based and one in-memory. I can add new entries to the timeline. Whenever a new entry is added, an event is raised. AI agents register with the timeline to be notified about new entries (I use ReactiveX in Python). The agents may choose to add new entries back into the timeline. For example, a topic extractor processes an article that was just added to the timeline and then adds the extracted topics as a new entry, which is associated with the one that triggered the processing. An AI agent that creates a summary of the article may also be triggered, and yet another timeline entry, this time with the summary, is introduced. This way, a graph with entries as the nodes starts to form. One can envisage multiple agents operating in parallel or in sequence to populate the timeline with information that my assistant can later leverage. The sketch below illustrates the idea.
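Here is a minimal sketch of that pub/sub wiring, assuming RxPY 4 (pip install reactivex). The Timeline class and the placeholder agent are illustrative, not the actual BrainExpanded code:

import time
import uuid
from reactivex.subject import Subject

class Timeline:
    def __init__(self):
        self.entries = []        # in-memory store
        self.events = Subject()  # stream of newly added entries

    def add(self, entry: dict) -> dict:
        entry.setdefault("id", uuid.uuid4().hex)
        entry.setdefault("timestamp", time.time_ns())
        self.entries.append(entry)
        self.events.on_next(entry)  # notify every subscribed agent
        return entry

def register_topics_agent(timeline: Timeline):
    def on_entry(entry):
        # only react to raw user entries; this also stops the agent
        # from reprocessing its own output
        if entry["author"] == "user":
            timeline.add({
                "content": {"topics": ["placeholder-topic"]},  # a real agent would call an LLM here
                "author": "topics_agent@0.0.1",
                "associated_with": [entry["id"]],
            })
    timeline.events.subscribe(on_entry)

timeline = Timeline()
register_topics_agent(timeline)
timeline.add({
    "content": {"content": "https://savas.me/2024/12/20/brainexpanded-introduction/"},
    "author": "user",
    "associated_with": [],
})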

Note that I don’t yet use a graph database, but that’s in my plan since I want to support GraphRAG when I implement the chat-based interaction with my assistant. Perhaps I can persuade Jim to help out 🙂

The following diagram illustrates the flow. A link is added into the timeline as an event (1). An agent is triggered (2) to retrieve the linked content, which is then stored as a new event (3). This new event triggers the “topic extraction” and “summarization” agents (4). They both generate new events, (5) and (6) respectively, one with the inferred list of topics and the other with the summary of the content. For now, the content can only be text, but it could easily be a photograph or a video as I start experimenting with multi-modal LLMs.

Notice how a graph has started to take shape:

Both the “topic extraction” and “summarization” agents are implemented to use llama3.3 (70B) via Ollama. The context window is small, just 8K tokens. I am planning to experiment with other LLMs, such as llama3-gradient (8B and 70B), which has a context window of up to 1M tokens.
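As an illustration, here is roughly what such an agent’s LLM call can look like against a local Ollama server, using Ollama’s documented /api/generate REST endpoint. The prompt and function name are mine, not the actual agent code:

import requests

def summarize(text: str, model: str = "llama3.3") -> str:
    # ask the locally running Ollama server for a one-shot completion
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Summarize the following article:\n\n{text}",
            "stream": False,  # return a single JSON object rather than a stream
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]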

Cycles and long processing chains

Care must be taken not to create endless processing loops over the timeline. For example:

An agent subscribes to the timeline’s event stream and publishes the same event type.

A(E) -> E
where "A" is an agent and "E" is an event type

Cycles may also be created through an arbitrary number of agents being invoked sequentially.

A1(E1) -> E2
A2(E2) -> E3
...
An(En) -> E1

Or, a really long chain of N agents may be invoked.

A1(E1) -> E2
A2(E2) -> E3
...
An(En) -> E(n+1)

In my current implementation, I have a single “dispatcher agent” that orchestrates the other ones to ensure that no such cycles or long processing chains are created. Agent-orchestration logic is one of the areas I will explore.
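As an illustration of the kind of guard a dispatcher can apply, here is a hedged sketch. It assumes each entry carries the chain of agents that produced it; that bookkeeping is my illustration, not necessarily how the actual dispatcher works:

MAX_CHAIN_LENGTH = 8  # arbitrary cap on sequential agent invocations

def should_dispatch(agent_name: str, entry: dict) -> bool:
    # e.g. chain == ["content_agent", "topics_agent"] for an entry produced
    # two hops away from the original user entry
    chain = entry.get("agent_chain", [])
    if agent_name in chain:
        return False  # cycle: this agent already appears in the provenance
    if len(chain) >= MAX_CHAIN_LENGTH:
        return False  # chain too long: stop runaway processing
    return True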

Look… it’s working

For a quick test, I used the in-memory timeline implementation.

Here’s an entry being added to the timeline:

sample_entry = TimelineEntry()
sample_entry.author = "user"  # the source of the event
sample_entry.content = {
    "content": "https://savas.me/2024/12/20/brainexpanded-introduction/"
}
timeline.add(sample_entry)    # raises an event for the agents to process

The above generates an entry like the following in the timeline:

{
    "id": "f34d2398b0c2455c9a730945d33c07a5",
    "timestamp": 1734687713827134000,
    "content": {
        "content": "Why neurosymbolic AI could be the next big thing\r\nhttps://fortune.com/2024/12/09/neurosymbolic-ai-deep-learning-symbolic-reasoning-reliability/"
    },
    "author": "user",
    "associated_with": []
}

Ignore the double “content:content” for now; there is a reason for it, which I will discuss in a different post.

Notice the timestamp. It will allow me to generate a temporal index of the entries in the timeline. Also, note that entries are immutable: once one is added, it can’t be modified. This allows me to reproduce the memory’s construction over time. If the implementation of an agent evolves or new agents get added, I can rerun them over all or parts of the memory. The “author” field captures the source of the event.
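Since the timestamps are nanoseconds since the Unix epoch (presumably via Python’s time.time_ns()), a temporal index can start as nothing more than a range query. A hedged sketch, assuming entries are dicts shaped like the ones above:

from datetime import datetime, timezone

def entries_on_day(entries, day: datetime):
    # timestamps in the timeline are nanoseconds since the Unix epoch
    start = datetime(day.year, day.month, day.day, tzinfo=timezone.utc)
    start_ns = int(start.timestamp()) * 1_000_000_000
    end_ns = start_ns + 24 * 60 * 60 * 1_000_000_000
    return [e for e in entries if start_ns <= e["timestamp"] < end_ns]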

The agents generate similar entries:

“Content retrieval” agent

{
    "id": "66d18bfce8f64b8eafcd65eb133036dd",
    "timestamp": 1734687288085462000,
    "content": {
        "content": "BrainExpanded - Introduction - savas parastatidis ... [[[[snip]]]]"
    },
    "author": "content_agent@1.0.0",
    "associated_with": ["14ffa75f973b49338daa622d3bd90c75"]
}

“Topics extraction” agent

{
    "id": "cbfe2b0233f24b1c933ef7410485619b",
    "timestamp": 1734695659837218000,
    "content": {
        "topics": [
            "Artificial Intelligence",
            "AR/MR",
            "Coding",
            "Cortana",
            "Personal Assistants",
            "Programming",
            "BrainExpanded",
            "Alexa",
            "Meta AI",
            "Digital twin"
        ]
    },
    "author": "topics_agent@1.0.0",
    "associated_with": [
        "6a57467e0b3249178b3ccd67cac4db0d"
    ]
}

Not the best list of topics. I haven’t iterated on the prompt yet, so I can probably get this agent to produce better results. Also, the text extraction from the web page isn’t great at this stage.

“Summarization” agent

{
    "id": "d7b60557fbbb4a3b9dea6e3358e718e6",
    "timestamp": 1734695733056087000,
    "content": {
        "summary": "The article introduces the author\'s side project, 'BrainExpanded', which aims to build an assistant-like experience using artificial intelligence (AI) and augmented reality (AR) technologies. The author, Savas Parastatidis, who has previously worked on projects like Cortana and Alexa, wants to create a digital assistant that can enhance human abilities by providing access to more egocentric context, both physical and digital.\n\nThe author draws inspiration from science fiction movies, such as "Her", where the protagonist\"s digital assistant can see and understand the world in the same way as a human. With the emergence of large language models (LLMs) and improvements in hardware for devices that capture egocentric context, such as smart or AR glasses, this experience is becoming possible.\n\nThe author plans to document the system design and implementation learnings of BrainExpanded, which will involve continuous sensing and memory capabilities to store and retrieve user context. The goal is to enable experiences like those seen in sci-fi movies, where a digital assistant can recall information and help users remember things they didn\"t explicitly record.\n\nThe article sets the stage for a series of posts that will explore the development of BrainExpanded, with the next post expected to dive deeper into the project\"s journey."
    },
    "author": "summary_agent@1.0.0",
    "associated_with": [
        "6a57467e0b3249178b3ccd67cac4db0d"
    ]
}

An ok summary. This agent can probably do better.

What’s next

As a next step, I am going to create a more permanent repository of timeline entries and start experimenting with (Graph)RAG in support of a chat experience over the content. I am thinking of using the file-based timeline implementation to monitor a folder in iCloud. The idea is to use iCloud as the backend storage solution. I can then add new entries directly on my phone as iCloud files. Those will eventually synchronize to my laptop (or some other computer), where the timeline processing runs. Each new entry will be detected and an event will be raised for the agents to process. An agent can also populate a graph store that will act as the RAG source. A sketch of the folder-monitoring idea follows.
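Here is a hedged sketch of that idea using the watchdog package (pip install watchdog). The iCloud Drive path, the subfolder name, and the print placeholder are my assumptions, not the actual implementation:

import json
from pathlib import Path
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

# iCloud Drive's local folder on macOS; the subfolder name is illustrative
WATCHED = Path.home() / "Library/Mobile Documents/com~apple~CloudDocs/BrainExpanded"

class NewEntryHandler(FileSystemEventHandler):
    def __init__(self, on_new_entry):
        self.on_new_entry = on_new_entry

    def on_created(self, event):
        # every new JSON file that iCloud syncs down becomes a timeline event
        if not event.is_directory and event.src_path.endswith(".json"):
            with open(event.src_path) as f:
                self.on_new_entry(json.load(f))

observer = Observer()
observer.schedule(NewEntryHandler(print), str(WATCHED), recursive=False)
observer.start()  # watches in a background thread; observer.stop() ends it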

As I continue to make progress, I will keep reporting findings/learnings.
