
Reflecting on 2025: Building CVOYA’s Future with AI Coding Agents

With 2025 now behind us, I want to share a few reflections from my CVOYA journey over the past several months.

This voyage started with a small set of big questions — the kind that are easy to daydream about, but harder to commit to building.

  • Can there be a truly useful AI companion that remembers and learns from everything in a user’s life — and incrementally becomes more helpful over time?
  • Can there be a single memory system under the user’s control, instead of a fragmented set of “memories” spread across apps, services, and agents?
  • Once memories are captured, can we support a growing set of installable AI agents that continuously process them — organizing, augmenting, summarizing, enriching, and extracting value indefinitely?
  • Can the non-stop computation be hosted on a home appliance when appropriate, avoiding unnecessary cloud costs?
  • And can all of this be done with provable trustworthiness — with privacy and user ownership as non-negotiable principles?

Those questions shaped almost everything I worked on this year.


2025 milestones and achievements

The questions above inspired me to experiment aggressively and learn fast. I wrote about some of my earlier thinking and prototypes under the category “BrainExpanded,” mostly as a way to clarify ideas and document what I was learning.

At the same time, I got exposed to a new style of execution: building a product end-to-end with the help of AI coding agents.

That approach has been surprisingly powerful. It keeps me close to the technology — the APIs, the apps, the infrastructure, the data model, the edge cases — while still leaving me enough room to think about product, direction, and the bigger picture. It has been extremely rewarding (and honestly, a lot of fun). And it has me thinking more seriously now about recruiting key partners and, eventually, hiring.

Key accomplishments in 2025:

  • Research, experimentation, and learning around the core ideas.
  • Planning and execution toward those ideas using AI coding agents.
  • Built iOS, Android, and macOS apps, and deployed services to both Azure and local compute appliances — all working end-to-end.
  • CVOYA, LLC formally formed.

What comes next

As exciting as building has been, the priorities ahead feel even clearer:

  • Nail the product. The highest priority is establishing the core user value proposition — and staying flexible enough to pivot along the way.
  • Bring in key partners. I’m actively working on identifying the right people to join the journey, and I hope to share news on this soon.
  • Strengthen product leadership. Product is not my strongest area, and I want someone who can drive it with clarity and discipline.
  • Start testing with real users. Faster iteration is great — but nothing beats feedback from real usage.
  • Reduce cloud cost pressure. I’m considering joining a startup program partly to offset hosting costs while we iterate.
  • Think carefully about funding. Whether VC funding is necessary (and when) is still an open question.
  • Use the companion to build the companion. One of the best ways to test what I’m building is to rely on it more directly as I make these decisions.

Working with AI coding agents

Throughout my career, I’ve believed technology should amplify human abilities. I’ve also been fortunate to learn from great technical, product, and business leaders — and to work in roles that let me go deep into systems while still seeing the full stack behind successful products.

That background has shaped how I use AI coding agents today.

My early experiments started in Python, where agents helped me write small scripts quickly. But as the idea grew into an end-to-end system, I moved to .NET 10. I knew the codebase would expand, and I wanted a production-quality foundation with a toolchain and frameworks I already knew well.

At first, I defined the APIs and architecture, wrote most of the code myself, and used agents for targeted tasks — especially tests. Over time, I became more comfortable delegating more complex work, including a .NET LINQ-to-Cypher translation layer (now open-sourced as part of the GraphModel project).
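
To make the kind of work I was delegating more concrete, here is a minimal sketch of what a LINQ-to-Cypher translation layer does: it walks a LINQ expression tree and emits an equivalent Cypher query. The Person type, the method names, and the generated query shape below are illustrative assumptions for this post — they are not GraphModel's actual API, which handles far more of LINQ than this toy does.

    using System;
    using System.Linq.Expressions;

    // Hypothetical node type used only for this sketch.
    public record Person(string Name, int Age);

    public static class CypherSketch
    {
        // Translates a simple predicate over a node type into a Cypher query.
        // Only binary comparisons between node properties and literal constants are handled.
        public static string ToCypher<T>(Expression<Func<T, bool>> predicate)
        {
            var label = typeof(T).Name;              // Person -> (:Person)
            var alias = label.ToLowerInvariant();    // variable name used in the query
            var where = Visit(predicate.Body, alias);
            return $"MATCH ({alias}:{label}) WHERE {where} RETURN {alias}";
        }

        private static string Visit(Expression e, string alias) => e switch
        {
            BinaryExpression b when b.NodeType == ExpressionType.AndAlso
                => $"({Visit(b.Left, alias)} AND {Visit(b.Right, alias)})",
            BinaryExpression b
                => $"{Visit(b.Left, alias)} {Op(b.NodeType)} {Visit(b.Right, alias)}",
            MemberExpression m when m.Expression is ParameterExpression
                => $"{alias}.{m.Member.Name}",       // property access -> node property
            ConstantExpression c when c.Value is string s
                => $"'{s}'",                         // string literal
            ConstantExpression c
                => c.Value?.ToString() ?? "null",    // numeric or null literal
            _ => throw new NotSupportedException($"Unsupported expression: {e.NodeType}")
        };

        private static string Op(ExpressionType t) => t switch
        {
            ExpressionType.Equal => "=",
            ExpressionType.GreaterThan => ">",
            ExpressionType.LessThan => "<",
            _ => throw new NotSupportedException(t.ToString())
        };
    }

    public static class Program
    {
        public static void Main()
        {
            // Prints: MATCH (person:Person) WHERE (person.Age > 30 AND person.Name = 'Ada') RETURN person
            Console.WriteLine(CypherSketch.ToCypher<Person>(p => p.Age > 30 && p.Name == "Ada"));
        }
    }

Even a toy translator like this hints at why the full layer was a good test of delegation: a real one has to deal with projections, ordering, traversals, and query parameterization — a lot of careful, well-specified work that suits this way of building.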

As the tooling and models behind these agents evolved rapidly, the way I worked evolved too. I went from asking for snippets to delegating entire slices of functionality — spanning services, apps, and deployment infrastructure — in a single push.

One interesting side effect: my GitHub activity can look less busy even when I’m moving faster. That’s because instead of many small commits, I now often ask agents to complete a larger chunk of work in one go — and the resulting commits contain larger, more cross-cutting changes.

The current flow looks something like this:

  1. I discuss a feature with the agents and we co-create a plan.
  2. I review the plan carefully, adjust it, and tighten requirements.
  3. I ask them to implement it.
  4. We iterate to fix issues and refine quality.

I usually have multiple agents working in parallel. In practice, it often feels like working with a small team of junior developers who can move fast with detailed instructions — but still require oversight, review, and direction.


Observations

A few things have stood out to me this year:

  • I genuinely enjoy this way of building. I feel incredibly productive, and I love that I can stay close to the technology while still spending time on architecture, product direction, and the bigger questions. The agents amplify my ability to execute, even though they still require supervision and good judgment.
  • Knowing what to ask matters more than I expected. For me, these tools work because I know how to reason about abstractions, patterns, and tradeoffs — and I can translate that into precise asks. The better the prompt, the better the output. In many ways, the skill ceiling is not “coding,” but “directing.” An understanding of the technologies involved isn’t optional.
  • Team dynamics will be interesting. Right now, I’m working alone across the entire codebase, with agents as force multipliers. When more humans join, the cadence may change — especially if pull requests are large and cross-cutting. I’m not yet convinced that “AI reviewing AI” is the right answer. I’ll need to adapt how work is scoped so that the speed stays high without losing quality.
  • The industry needs to rethink mentorship. If AI agents become a standard way of building, how do we train the next generation of experts? I was incredibly fortunate to have mentors who shaped how I think about production software and real-world systems. But junior engineers who rely heavily on AI early might not experience the same learning curve — and they might not yet know what to ask for, or what “good” looks like. We’ll need new models for mentorship that reflect this new reality.

Conclusion

In many ways, 2025 was a year of experimentation and foundation-building for CVOYA.

I’m still asking those big questions — but I’m increasingly focused on grounding them in a clear value proposition and real user feedback. I’m excited about what’s possible, while staying realistic about what it takes to turn a vision into a product that people truly rely on.

It’s been an incredible and fulfilling journey so far — and it feels like only the beginning.

Savas Parastatidis

Savas Parastatidis works at Amazon as a Sr. Principal Engineer in Alexa AI. Previously, he worked at Microsoft, where he co-founded Cortana and led the effort as the team's architect. While at Microsoft, Savas also worked on distributed data storage and high-performance data processing technologies. He was involved in various e-Science projects while at Microsoft Research, where he also investigated technologies related to knowledge representation & reasoning. Savas also worked on language understanding technologies at Facebook. Prior to joining Microsoft, Savas was a Principal Research Associate at Newcastle University, where he undertook research in the areas of distributed, service-oriented computing and e-Science. He was also the Chief Software Architect at the North-East Regional e-Science Centre, where he oversaw the architecture and the application of Web Services technologies for a number of large research projects. Savas worked as a Senior Software Engineer for Hewlett Packard, where he co-led the R&D effort for the industry's Web Service transactions service and protocol. You can find out more about Savas at https://savas.me/about
