Making it personal: How AI work assistants might outperform human ones

Sumeet Sobti

Engineering

Set your expectations high!

A busy executive shows up for work one morning in a hurry. Without warning, she asks her assistant for “those slides,” rushes into her office, and shuts the door. But wait, thinks the assistant, which slides? 

His boss didn’t tell him exactly what she was looking for, but, as a good assistant, he should have some idea of what she could be after, even with so little to go on. If he can’t guess which presentation she had in mind, then he should at least be able to come up with a handful of possibilities.

He starts thinking. Could she want the slides for one of the meetings she’s going to that day? He looks up her calendar. She has her monthly meeting with the Research and Development team at 11am, and the R&D director is going to present some slides. That could be it. At 1pm, she has an investor meeting, where she’s going to need the pitch deck. That seems like a plausible candidate. But she also has a sales meeting later that afternoon, and she’s been collaborating with her product manager on a sales presentation. That might be it, too. Given all he knows about his boss’s context and needs, he is able to come up with a few reasonable guesses about which slides “those slides” might be.

This is just a regular day in the life of a good “human” assistant. They have to know the work-lives of their bosses, their bosses’ preferences, and their bosses’ ways of asking for things. They’re also expected to understand poorly articulated requests and anticipate needs before those needs are explicitly made known.

So, if we already require this level of performance from human assistants, then why should our expectations for an AI-based work assistant be any lower? If anything, there are limits to how much a human assistant can anticipate his boss’s needs—limits by which AI work assistants aren’t necessarily constrained.

A human assistant, for example, is not going to know that his boss was watching an onboarding presentation for newly hired engineers last night and that she’s now looking for the slides from that presentation. Or that, earlier this morning, in a private Slack message, she promised to review a colleague’s slides, and that the colleague just shared those slides with her. Recent breakthroughs such as BERT (Bidirectional Encoder Representations from Transformers) and other deep learning models make it possible to understand the contents of Slack messages and documents well enough that AI work assistants start to exhibit superhuman capabilities. This isn't just a future aspiration; companies like Glean are doing this right now.
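
To make the semantic-matching idea concrete, here is a minimal sketch using the open-source sentence-transformers library, which wraps BERT-style encoders. The model name and the toy candidate documents are illustrative assumptions, not a description of Glean's actual pipeline.

```python
# A minimal sketch: embed a vague query and some candidate documents with a
# BERT-style encoder, then rank candidates by semantic similarity to the query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT-style sentence encoder

query = "those slides"
candidates = [
    "Onboarding presentation for newly hired engineers (viewed last night)",
    "Q3 investor pitch deck for the 1pm meeting",
    "Slides shared this morning in a Slack DM asking for review",
]

# Encode the query and candidates into dense vectors, then score by cosine similarity.
query_vec = model.encode(query, convert_to_tensor=True)
doc_vecs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)[0]

# Present the best-matching candidates, most similar first.
for score, doc in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.3f}  {doc}")
```

In a real assistant, the candidate set itself would be assembled from signals like the calendar, recent activity, and shared documents, with the encoder scoring how well each candidate fits the request.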

A personal assistant for each person in the enterprise

AI can deliver a personal assistant not just for a company’s executives, but for every employee in every role. If a designer in the Palo Alto office and an engineer in the Bangalore office both ask the system to show them “those slides” at the same time, then the AI should understand enough about the context of those two requests to present the designer and the engineer with slide decks that are relevant to each of them individually.

Cases where personalization makes a significant difference aren’t hard to find. Suppose you are a new employee starting out at a company. When you ask your AI work assistant to help with “onboarding,” you expect the assistant to understand that you are filling a particular role on a particular team in a particular department. You want the assistant to find you documents that are appropriate for your situation. As an engineer, you might have access to onboarding documents from several engineering teams. You might even have access to onboarding documents from other departments, like product and sales. But it would not be appropriate for the assistant to present you with a random selection of all the onboarding documents you have access to. It should use reasonable judgment to show you things you might actually need. At the same time, it should be smart enough to understand your intent if you ask for “sales onboarding documents” instead.

Large companies with office locations around the world offer unique challenges and opportunities for personalization. An employee in the Ireland office might search for “employee benefits” and expect to find documents and information that are completely different from those expected by an employee in the Beijing office. Similarly, a U.S. employee searching for a “holiday schedule” might expect to see documents related to North American holidays ranked higher than those relating to holidays elsewhere.
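
As a simplified illustration of how such signals might be combined, the sketch below uses a hypothetical data model and hand-picked boost weights, not Glean's production ranker. It boosts documents that match the user's department and office, while letting an explicit mention of another department in the query override the personalization.

```python
# Hypothetical context-aware ranking: start from a base relevance score (e.g. from a
# semantic matcher like the sketch above) and add boosts for matching the user's
# department and office, unless the query explicitly names a different department.
from dataclasses import dataclass

@dataclass
class User:
    department: str
    office: str

@dataclass
class Doc:
    title: str
    department: str
    office: str
    base_relevance: float  # assumed to come from a semantic matcher

DEPARTMENTS = ["Engineering", "Sales", "HR"]  # illustrative

def personalized_score(doc: Doc, user: User, query: str) -> float:
    score = doc.base_relevance
    query_lower = query.lower()
    # Departments named explicitly in the query take priority over personalization.
    explicit = [d for d in DEPARTMENTS if d.lower() in query_lower]
    if explicit:
        if doc.department in explicit:
            score += 0.3  # illustrative boost weights
    elif doc.department == user.department:
        score += 0.3
    if doc.office == user.office:
        score += 0.2
    return score

new_engineer = User(department="Engineering", office="Bangalore")
docs = [
    Doc("Backend team onboarding guide", "Engineering", "Bangalore", 0.70),
    Doc("Sales onboarding playbook", "Sales", "Palo Alto", 0.68),
    Doc("Ireland employee benefits overview", "HR", "Dublin", 0.40),
]

for query in ("onboarding", "sales onboarding documents"):
    ranked = sorted(docs, key=lambda d: personalized_score(d, new_engineer, query),
                    reverse=True)
    print(query, "->", [d.title for d in ranked])
```

With these toy weights, the new engineer in Bangalore sees their own team's onboarding guide first for “onboarding,” but the sales playbook jumps to the top as soon as the query names sales explicitly.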

No two people do the same work, have the same needs, or discover the same resources at the same time, so an AI work assistant that can’t tailor its behavior to each user’s needs is a nonstarter. Even for just one user, interactions with an assistant are expected to evolve over time. If you get back from a week of vacation and ask the assistant what you missed, then the answer should be personalized not just for you, but for you and your return from this particular vacation at this exact time.
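
The “what did I miss” case can be thought of as personalization along the time axis as well. Below is an illustrative sketch under an assumed data model: only updates newer than the user's last-active time are considered, restricted to the channels that user actually follows.

```python
# Time-aware "what did I miss?" sketch (assumed data model): filter updates to those
# newer than the user's last-active time and in channels the user follows, newest first.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Update:
    title: str
    channel: str
    timestamp: datetime

def what_did_i_miss(updates, followed_channels, last_active):
    missed = [u for u in updates
              if u.timestamp > last_active and u.channel in followed_channels]
    return sorted(missed, key=lambda u: u.timestamp, reverse=True)

now = datetime(2022, 3, 14, 9, 0)
last_active = now - timedelta(days=7)  # back from a week of vacation
updates = [
    Update("Design review notes for the new search UI", "team-design", now - timedelta(days=2)),
    Update("Company all-hands recording posted", "general", now - timedelta(days=5)),
    Update("Incident postmortem: indexing outage", "team-infra", now - timedelta(days=3)),
    Update("Holiday schedule posted", "general", now - timedelta(days=30)),  # before the vacation
]

for u in what_did_i_miss(updates, {"team-design", "general"}, last_active):
    print(u.timestamp.date(), u.channel, "-", u.title)
```

A production system would of course rank by relevance as well as recency, but the point stands: the same question from the same person yields a different answer every time it is asked.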

Personalization, therefore, can’t be an afterthought in the design of an AI work assistant. It has to be a core component of the system. We at Glean have embraced this insight from the very beginning and have found it to be the key to providing the most relevant results to our users.