CNN
—
Last month, major companies such as Walmart, Starbucks, Delta and Chevron used AI to monitor employee communications. Reaction online was immediate, with employees and workplace advocates raising concerns about the loss of privacy.
But experts say that while AI tools may be new, monitoring, reading and tracking employee conversations is anything but. AI may make that monitoring more efficient, and the technology may pose new ethical and legal challenges and risk alienating employees, but the reality is that conversations at work were never private anyway.
“Monitoring employee communications is not new, but the sophistication of analysis enabled by continued advances in AI is increasing,” said David Johnson, principal analyst at Forrester Research.
“What is also evolving is the industry’s understanding of how such surveillance impacts employee behavior and morale in different contexts, and the policies and boundaries for acceptable use within the workplace.”
A recent study by Qualtrics, a company that uses AI to filter employee engagement surveys, found that while managers are bullish about AI software, employees are nervous, with 46% saying the use of AI in their workplace is “scary.”
“Trust is lost and regained in buckets, so failures in early technology adoption will have a long tail in employee trust over time,” Johnson said. He added that the future of AI-powered employee monitoring is “inevitable.”
One company bringing AI to popular work-related software, including Slack, Zoom, Microsoft Teams, and Meta's Workplace platform, is seven-year-old startup Aware.
Aware is working with companies like Starbucks, Chevron, and Walmart. The company says its products are aimed at detecting everything from bullying and harassment to cyberattacks and insider trading.
Aware said the data remains anonymous until its technology flags one of the issues it has been asked to detect. Any issues are then reported to Human Resources, IT or Legal for further investigation.
A Chevron spokesperson told CNN that the company uses Aware to monitor public comments and interactions on the company's Workplace platform, where employees can post updates and comments.
Meanwhile, a Starbucks spokesperson said the company uses the technology to improve the employee experience, including monitoring trends and feedback on internal social platforms.
Walmart told CNN it uses software to protect its online internal community from threats and other inappropriate behavior and to track employee trends.
Delta Air Lines said it uses the software to manage its internal social platforms, monitor trends and sentiment on a daily basis, and maintain records for legal purposes.
Other monitoring services also exist. Cybersecurity company Proofpoint uses similar technology to detect cyber threats, such as incoming phishing scams, and to monitor risks such as employees downloading and sending sensitive work data to their personal email accounts. (Disclosure: CNN's parent company, Warner Bros. Discovery, is a subscriber.)
Proofpoint, used by many Fortune 100 companies, recently introduced a new feature that restricts the use of AI tools like ChatGPT on company systems when it violates company policy. The aim is to ensure employees do not share sensitive company data with AI models that could surface that information later.
Still, introducing AI into the workplace raises concerns that employees may feel like they are being watched.
Reece Hayden, senior analyst at ABI Research, said it is natural for some workers to feel a “Big Brother effect.”
“This can impact their willingness to message and speak openly with colleagues via internal messaging services like Microsoft Teams,” he said.
Social media platforms have long used similar techniques. Meta, for example, uses content moderation teams and related technology to manage abuse and conduct on its platforms. (In fact, Meta has recently been heavily criticized over allegations of improper moderation, particularly regarding child sexual abuse.)
At the same time, employee behavior has been monitored on business systems since the early days of email. Even when employees are not on a secure work network, businesses can monitor their activity through their browsers. (However, Aware only works with corporate communications services and not with browsers.)
“Trying to understand employee patterns is not a new concept,” Hayden said, noting that companies track things like logon times and meeting attendance.
But what is changing in this process is applying more advanced AI tools directly to employee workflows. AI software allows companies to quickly analyze thousands of data points and keywords to gain insights into trends and what employees are talking about in real time.
Hayden said companies may want to track employee conversations not because they care about workers' weekend plans or the latest Netflix show they're watching, but because of what the data can reveal about the workforce.
“This will give us more detailed real-time insights into our workforce,” Hayden said.
He added that this will allow companies to better develop internal messages, policies and strategies based on what the software learns about their employees.
While the rise of AI in the workplace may raise legal and ethical challenges, in addition to questions around accuracy and relevance, Forrester Research's Johnson said he believes the biggest challenge ahead, in both the short and long term, is earning the trust of employees.
Simply put, people don't want to feel like they're being watched.
He said organizations need to be careful about how they leverage the technology. If companies use it to judge employee productivity, or if flagged employees are met with disciplinary action or termination, it may take years before employees trust the company again.
“It's very important to be cautious” when using this technology, he said.