• TeamViewer’s Session Insights tool uses generative AI to summarize remote IT-support sessions.
  • The company, based in Germany, must ensure its AI features are secure, private, and GDPR-compliant.
  • This article is part of “CXO AI Playbook” — straight talk from business leaders on how they’re testing and using AI.

TeamViewer, a software-development company founded in 2005, is among the many businesses leaning into artificial intelligence.

Its technology, which allows clients to remotely access, control, monitor, and repair devices such as laptops, smartphones, tablets, and augmented-reality headsets, increasingly incorporates AI. Clients use it to power IT support in areas such as customer service, industrial manufacturing, and inspections of specialized machinery.

Mei Dent, TeamViewer’s chief product and technology officer, joined the company last year. She oversaw the development of its first generative-AI product, Session Insights, which summarizes remote IT-support sessions. The product launched at the end of October.

Dent told Business Insider that collaboration with internal teams was crucial to implementing the technology across the company. She described the race for the best AI model as “heated” and said TeamViewer worked with third-party partners to leverage their strengths.

The following has been edited for clarity and length.

How does AI fit into your day-to-day workflow?

My primary focus is building products like our remote connectivity solutions and Frontline products.

AI is integrated across the company, not just in our customer-facing products but also for internal use in areas like research or code reviews.

We have a strong data foundation, emphasizing data policies and governance, especially due to GDPR concerns in Germany, where TeamViewer is based. Internal AI usage is governed by policies to protect confidential data, which also influences how we implement AI at the product level.

Does TeamViewer place any one person in charge of the AI vision, or is the approach collaborative?

It’s a collaborative effort. Different departments use AI in ways that suit their specific needs. The marketing, sales, and development teams have different goals.

Our legal and data teams provide overarching guidelines, but the adoption isn’t top-down.

TeamViewer just launched its first product powered by generative AI. Can you elaborate on it?

We launched Session Insights, an add-on for our TeamViewer Remote and Tensor products. It uses generative AI to capture and summarize support sessions in a text-based log.

Typically, support agents manually write up what occurred during a TeamViewer session, which can be time-consuming. Session Insights automates the process, generating detailed summaries that save time and improve accuracy.

It also provides analytics to identify common support issues and help build knowledge bases for future use.
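As a rough illustration of the kind of summarization described here, the Python sketch below passes a support-session log to a hosted language model and asks for a structured write-up. The client library, model name, prompt, and log format are assumptions chosen for illustration; this is not TeamViewer's implementation.

```python
# Illustrative sketch only: summarize a remote-support session log with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_session(session_log: str) -> str:
    """Return a short, structured summary of a remote-support session log."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Summarize this IT-support session: the reported issue, "
                        "the steps taken, and the resolution."},
            {"role": "user", "content": session_log},
        ],
    )
    return response.choices[0].message.content

print(summarize_session(
    "10:02 Agent connected to laptop. 10:05 Cleared printer spooler. "
    "10:08 Reinstalled driver. 10:11 Test page printed. Issue resolved."
))
```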

How do you articulate the value generative AI brings? What are the specific metrics you’re looking at?

For Session Insights, we worked with our customers in beta to make sure we understood the business value and return on investment.

At this point, I think any company can leverage a language model and see some “wow” factor. But we need to clearly articulate the business value.

For a client providing support through TeamViewer, it means productivity gains. Can you take on more tickets? How much time can you save by not writing a summary?

For the person receiving support, it means time to resolution: How long does it take until that person is satisfied?

You mentioned GDPR. How does TeamViewer ensure security and privacy when using generative AI in a product?

We use a multitenant cloud. AI processing is confined within each tenant’s environment, and administrators have full control over which sessions can use AI features. Customers can decide where and how their content is stored, ensuring data isn’t intermingled.

Users also need to be informed, so a notification lets them know when a session is being recorded or processed by AI.

And we fully encrypt content and implement data anonymization before any AI processing to protect personally identifiable information, even within the same company.
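As a loose illustration of the anonymization step Dent describes, the sketch below masks obvious identifiers in a session transcript before any model call. The regex patterns and placeholder tokens are assumptions; real anonymization and encryption pipelines are far more involved, and TeamViewer's actual approach is not public.

```python
# Illustrative sketch only: mask e-mail addresses and phone numbers in a
# transcript before it is sent to any AI model. Not TeamViewer's code.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(transcript: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholder tokens."""
    transcript = EMAIL.sub("[EMAIL]", transcript)
    transcript = PHONE.sub("[PHONE]", transcript)
    return transcript

log = "User jane.doe@example.com (+49 151 2345 6789) reported a VPN outage."
print(anonymize(log))
# -> "User [EMAIL] ([PHONE]) reported a VPN outage."
```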

Does the company work with third-party partners to create new AI features, and if so, which companies?

We don’t develop our own AI models. We have a multipartner approach, working with partners like Google, OpenAI, Microsoft Azure, Meta, and so on. We evaluate which model offers the best quality and which is the most economical.
