Technology is built by humans, for humans.

This is a probably obvious, yet often forgotten, truth. If you want to succeed at pioneering AI in an organization, you don’t have the luxury of forgetting it.

During my work, I often interact with managers running companies that are dealing with their first AI projects. What I realised is that understanding the emotional implications of their new journey is sometimes even more important than the technology itself.

Trying unbeaten paths is always challenging, even when AI isn’t involved. It’s especially hard if you’re a seasoned manager who has proven their worth hundreds of times and suddenly finds themselves playing a game where they have as much experience as their interns. AI adds another layer of complexity. When you embed AI in your organization and use it for decision making, you’re to some extent handing the helm to a piece of software. To make things worse, you don’t fully understand the technology, it’s supposedly smarter than you, and maybe it’s even a black box that won’t share its secrets!

I’ve observed that the combination of doing something new, not fully understanding the technology, and the hype around AI causes two main reactions in decision-makers:

  1. Overconfidence and excessive trust in the technology. The mental pattern goes: “if AI is so powerful and can learn from all our data, how can we fail?”. The risk with this approach is that AI becomes a silver bullet, unrealistic expectations are set, and setbacks lurk around the corner.
  2. Anxiety and the need to control. This stems either from a lack of trust in AI, or from the fear of becoming irrelevant, with our experience trumped by gigabytes of data and our skills replaced by algorithms. People experiencing this often become controlling and try to push their decisions into the technical development of the AI project, like a pitch invasion of the data scientists’ field.

If you’re working on an AI project in a traditional organization, chances are that either you or someone you work with falls into one of these two buckets. From here on I’ll focus on the second case: you need to help someone overcome their emotional burdens and become more effective at managing AI. If you’re the person in need, the tips still apply.

I’ve found three different measures that seem to be effective:

  1. Empower people to make use of their strengths. Someone with 20 years of experience in an industry can be extremely valuable to an AI project, even with zero technical knowledge. The catch is that they need to contribute with their strengths (domain knowledge), rather than with their weaknesses (tech expertise). Domain knowledge is extremely useful in four stages of an AI project:
  • Identifying a clear value proposition
  • Evaluating business risks
  • Understanding how to embed the technology in the current status quo
  • Evaluating performance and debugging
  2. People fear what they don’t know, so cultivate decision makers’ AI education. We don’t need to transform organizations into armies of data scientists, but everyone should have some basic knowledge of AI principles. I’m always astonished by how often even the most basic AI principles are ignored, and by how much even the simplest AI training can benefit business executives.
  3. Create a fruitful relationship of mutual trust between decision makers and data scientists. This depends greatly on point 2, as without a common vocabulary there can’t be any communication. Once that vocabulary exists, make sure the boundaries between business executives and data scientists are well understood and respected. In a trusted environment, business people should worry only about expressing their business needs and helping data scientists understand the context of the problem they’re working on. Setting development milestones or deciding which technology to use should stay in the realm of technical people. Decision makers should trust the technical team, which, in turn, should be open and make an active effort to understand business needs.

As technology evolves and becomes more democratized, the winners will be the organizations that are best at integrating it into their DNA, not the ones that can write the best code. To make this happen, empathy has to be in your arsenal: practice it.