I recently visited Vancouver, BC, Canada. On the drive there from Kirkland, WA, and back, I tested my new car’s self-driving capability (the Mercedes EQE SUV 500 offers Level 2 for now). In practice, this means that, once asked to drive on its own, the car can automatically adjust its speed based on traffic conditions, stay in its highway lane even through turns, change lanes to overtake slower cars, and observe traffic signs to update its understanding of the conditions. For example, the car used its cameras to identify temporary speed limit signs in construction areas, instead of depending only on the speed limit information in the navigation data, and automatically adjusted its target speed.

Vancouver – Kirkland with AI doing most of the driving.

I had to supervise throughout the trip, with my hands on the steering wheel at all times. Nevertheless, the car’s AI enhanced me in the task of driving. I am aware that many other cars offer Level 2 (and even Level 3) self-driving today; this was simply my first time using the feature myself.

The topic of AI in our everyday lives came to mind because of a recent conversation with a colleague. I choose to think of AI technologies as enhancing our human abilities, as helping us with everyday tasks, as assisting us in accomplishing more… driving, remembering/recalling memories, learning, scheduling, coordinating, and so much more. I don’t think of AI as replacing us or as being a threat to us.

The goal of AI as an enhancer of human abilities is what motivates me to work on the future of mixed/augmented/enhanced reality technologies. Today, the car helped me drive from Vancouver back home by sensing its surrounding environment and making decisions, without me ever feeling that I had lost agency. The car returned control immediately whenever I requested it (e.g., by pressing the brake or the gas pedal, or by turning the steering wheel).

In a similar way, I would love to have an AI agent that uses sensors (mostly wearables) to sense my environment, activities, situational contexts, the people around me, the conversations in which I participate, and much more, in order to help me accomplish tasks, do things better, learn more, and be better. From an early point in my work on AI-powered digital personal assistants, I believed that my personal assistant needed access to both my digital and my everyday physical contexts to be truly helpful. MR/AR technologies are exactly that… context-generation technologies for our AI-powered enhancers/helpers/assistants/co-pilots/(you can use your favorite term).

Savas Parastatidis

Savas Parastatidis works at Amazon as a Sr. Principal Engineer in Alexa AI. Previously, he worked at Microsoft, where he co-founded Cortana and led the effort as the team's architect. While at Microsoft, Savas also worked on distributed data storage and high-performance data processing technologies. He was involved in various e-Science projects while at Microsoft Research, where he also investigated technologies related to knowledge representation & reasoning. Savas also worked on language understanding technologies at Facebook. Prior to joining Microsoft, Savas was a Principal Research Associate at Newcastle University, where he undertook research in the areas of distributed, service-oriented computing and e-Science. He was also the Chief Software Architect at the North-East Regional e-Science Centre, where he oversaw the architecture and the application of Web Services technologies for a number of large research projects. Savas worked as a Senior Software Engineer for Hewlett Packard, where he co-led the R&D effort for the industry's Web Service transactions service and protocol. You can find out more about Savas at https://savas.me/about
