The average person spends about 85,000 hours of their life at work, much of it alongside colleagues. With so many daily interactions, it’s no surprise that issues arise in these relationships. And because many workers lack the skills or confidence to address these tensions, they can fester, impacting the individuals involved, their teammates, and the wider organization.
With the rapid development of technology and the launch of new AI tools, how will these changes affect workplace relationships? What are the opportunities, and what are the risks? Can AI help workers resolve disputes, alleviating the often significant personal and business impacts of workplace conflict?
The issue of bias
In trying to answer this, the first consideration is inherent bias. As humans, our biases and expectations shape our approach to conflict. Often learned in childhood, this pre-programming influences our perceptions and interpretations of what another person’s words or actions mean.
AI algorithms are widely recognized to reflect the biases of the humans who built them. In DataRobot’s State of AI Bias Report, 43% of senior leaders said their organizations had lost employees due to bias in algorithms. Given this likely inherent bias, there is ample room for AI to return incorrect, inaccurate, or potentially damaging guidance that can nonetheless look credible to a layperson seeking help with a sensitive conflict issue.
Are AI’s biases easier to correct than those entrenched in our psychology? In the future, could AI approach a situation without bias, and be able to see different perspectives, be open to a range of possibilities, and analyze these with impartiality?
How we feel about AI solving our conflict
It is also important to look at the emotional context. When people are in conflict, they have a core need to be heard, to have their feelings recognized, and to feel validated. In a difficult conversation, these emotions are conveyed through words, body language, tone of voice, and pacing. When ‘conversing’ with AI, workers are unlikely to get the emotional recognition they need.
So far, much of the debate around AI has focused on the answers it delivers rather than how AI makes us feel about those answers, and, indeed, about ourselves. Studies have shown that people react differently when decisions are delivered by AI versus humans, perceiving AI to be less moral or fair. For example, research found that when applicants for a place on a select expert panel were told that AI was evaluating them, the successful applicants felt less positive about the research company than those told a human was evaluating them.
However, one potential strength of AI is that it doesn’t have to regulate its emotional response in a conflict situation. In conflict, managing emotions is key to responding constructively rather than reacting emotionally. AI doesn’t face this internal management challenge and could analyze the information presented objectively, although it remains dependent on the quality and bias of its inputs. As AI progresses, and word analysis is combined with facial and body language recognition, AI could more accurately recognize the emotions in play. But will we learn to feel the same way about its outputs as we would those of a human?
Conflict is complex
Relationships are complicated, particularly those experiencing difficulties. People have different personalities, experiences, and ways of expressing themselves. There are often other people involved and other complex factors at play within the wider organizational culture.
On the face of it, AI has obvious strengths because it can take on board and analyze vast amounts of complex information. But can AI be given all the complex information relating to a conflict? If one person looks for an ‘answer’ to a relationship breakdown, they input only their own perspective, missing the valuable insights of others involved and the broader perceptions of the surrounding team and organization. AI won’t capture the nuances of how people shift their perspectives as they learn other parties’ viewpoints and experience the reactions of others.
Certainly, AI and our relationship with it are evolving all the time. Machine learning can get better at reducing bias, we can get better at understanding how AI makes us feel, and algorithms could be programmed to factor in the complexity of conflict within organizations. But AI is unlikely to ever be the best solution for resolving conflict. You are the only one who knows what’s best for your situation. Relying on AI reduces much of the opportunity for human-to-human conflict resolution. The process of discovering other people’s perspectives is enriching. It increases self-awareness, helps make the real human connections everyone needs in a technology-led world, and strengthens relationships in the place where we spend around a quarter of our lives. While we don’t have all the answers on AI, when it comes to conflict, you do have the answer, and it lies within.