Trust is the foundation of effective teams and of working relationships between team members. I have worked with hundreds of team members and teams to build stronger bonds and trust. Clients call me when trust issues between team members are getting in the way of individual or team performance. When trust is damaged, working relationships decline: people assume bad intent, mistakes go unforgiven, and transparency fades. People manage that trust risk by building a relationship barrier to protect their status and position in the organization.

First, it is important to define trust. There are a number of definitions out there, and some are too philosophical or academic for the workplace. The most practical definition I have found comes from the book Trust Works by Ken Blanchard et al. In this book, the authors define trust through their ABCD Trust Model, described as follows:

Able -- Demonstrating Competence. Good at what you do and uses those skills to help others. Knows the processes and methods needed to make a difference and generate quality results.

Believable -- Act with Integrity. Honest, ethical and sincere. Intentions are good. Able to be genuine, authentic and vulnerable. Behaves in a way that respects others -- keeps confidences, doesn't talk behind people's backs.

Connected -- Care About Others. Has two-way working relationships and makes time for others. Shares what is on his/her mind, listens well and asks for input. Values and accepts others.

Dependable -- Maintain Reliability. Delivers as promised. Follows through on commitments. Is accountable for his/her responsibilities.


Although all of these characteristics are important to trust, I have found that rebuilding the "Believable" characteristic is the biggest accelerator for improving trust between team members. When team members can be more vulnerable and authentic, they are more ready to listen to the other person's perspective and more open to learning about the genuineness of the other person's intent. In addition to you being vulnerable and authentic, becoming more "Believable" can only happen if the other person is willing to accept you despite perceived shortcomings and let go of past experiences that weakened trust.

Automation, Robotics and Artificial Intelligence -- Can They Be Trusted?

When trust in a human fails, we can at least sit down and talk with the person face-to-face, read their emotions and assess the genuineness of their intent. We can't do that with automation. We can't look it in the eye, inspect its algorithms or independently evaluate the bias in those algorithms. I definitely recommend viewing the TED talk by Peter Haas discussing the risks associated with AI and automation.

Many of you know that I am fascinated by the future and the progress of technological advances. My articles "Are You Ready to Be a Disruptive Leader?" and "What Should I Wear?" focus on the fast-moving progress in automation, robotics and artificial intelligence and the leadership skills associated with driving disruptive innovation as a strategy for the future.

I just attended a two-day workshop sponsored by the Technology Collaboration Center of Houston on Automation, Robotics and Artificial Intelligence. I learned a lot from the speakers about the use of robots in nuclear waste removal, the "uberizing" of underwater repairs and maintenance in the oil and gas industry, the use of "assisted reality" in technician inspections, and Amazon's AWS "algorithm store," Amazon SageMaker.

You will see from my leadership competency model that "Building Trust in Automation" is one of a number of leadership competencies that are key to leading innovation and disruption inside an organization.

First, I am a proponent of greater use of automation, robotics and AI to increase organizations' strategic advantage. "Dreaming" about the future state of your organization is healthy because it lets the organization assess how it needs to change today for possible future scenarios.

If you read my book, Breakthrough Time, you learned about my "dream" of the future where my grandchildren in their 50s come back in time to get my colleague and me, as the Teamwork Sharks, to help them commercialize a revolutionary technology (or maybe it wasn't really a dream....). The question for today is: what are the opportunities now with automation and AI to either 1) automate repeatable tasks or 2) disrupt your market with a different automation-driven business model?

Yes, the opportunities to automate are increasing exponentially. We get excited about new technology because of its promise to make business life better. But consumers and users of these automation solutions need to be careful about blindly trusting them (automation bias) and should consider the significant risks associated with introducing automation, robotics and AI. Here are some examples of risks and concerns associated with automation and robots:

  • The recent airplane disasters with the Boeing 737 Max 8 aircraft. These planes use more advanced automation technology (the Maneuvering Characteristics Augmentation System, or MCAS). The facts are yet to come out, but some believe that Boeing may not have trained pilots enough to adequately collaborate with the new automation system, especially in the event of an automation system failure or malfunction.
  • On March 18, 2018, an Uber self-driving car killed a pedestrian in Tempe, Arizona, who was crossing the street while walking a bicycle. According to the police report and the NTSB preliminary report, the autonomous car detected the pedestrian and attempted to initiate emergency braking, but Uber had deactivated the emergency braking system to reduce the potential for erratic vehicle behavior during testing. There was also a safety driver in the car who was supposed to intervene as necessary. The only problem was that the safety driver looked down at her phone right before the crash because she was watching The Voice on Hulu.
  • If you have Alexa, Amazon's automated assistant, you may know that when you wake Alexa (by saying "Alexa"), it records your conversations with it so Amazon can learn more about your habits and preferences. However, it is only supposed to record your conversations with Alexa, not all of your conversations. There are privacy risks associated with this process. Some fear that Alexa could record all of your conversations, and there have already been instances of family conversations being recorded unintentionally and sent to a family's contacts.

Building Trust in Automation

Building trust in automation will continue to be a major challenge for organizations in the coming years. Research and standards development on how to build trust in automation continue. Some trust models are emerging, but none yet with the simplicity of the ABCD Model for human-to-human trust. So why don't we try applying the ABCD Model to trust in automation:

Able -- Demonstrating Competence. The automation or robotic system is capable of making decisions and executing tasks in a quality, safe manner to achieve the automation objectives. The biggest challenge often comes in what are called "Edge Cases": unusual situations that may not have been anticipated in the programming or the AI. Many developers of automation and robotics systems are finding that greater collaboration between humans and those systems helps to mitigate risks and provides a safeguard that the system's capabilities are what is needed.

Believable -- Act with Integrity. Since automation does not have good or bad intentions, we can only assess whether it is operating in an ethical manner that respects the values of the organization and people it serves. The Institute of Electrical and Electronics Engineers (IEEE) recently released "Ethically Aligned Design -- A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (A/IS)." The general principles in this vision are a good place to start in assessing your organization's ethics risk with automation and in prioritizing the well-being of customers and staff.

Connected -- Care About Others. Automation does interface with humans and executes tasks to serve those humans, so a two-way working relationship and an ability to interact are necessary. Researchers are finding that when robots searching for objects in a remote location ask humans questions to validate what they have found, it can be a great trust builder. NASA's Robonaut 2 verbally validates its "understanding" of a human's verbal instructions. (A short sketch of this kind of validation appears after these four characteristics.)

Dependable -- Maintain Reliability. It is important that the automation can be counted on to deliver as promised. Advanced metrics and human monitoring are recommended to assess whether these systems are operating as expected and to surface information that might point to possible "Edge Cases." Safeguards, human or automated, need to be robust enough to reduce the risk of those Edge Cases.
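
To make the "Connected" idea concrete, here is a minimal sketch of a robot validating a detection with a human operator before acting. The object label, confidence value and prompt wording are hypothetical illustrations, not how Robonaut 2 or any specific system is actually programmed:

    # Illustrative sketch only: a robot confirming a detection with a human before acting.
    # The object label, confidence value and prompts are invented for this example.

    def confirm_with_human(detected_object, confidence):
        """Ask the human operator to validate a detection before the robot acts on it."""
        prompt = (f"I believe I found a {detected_object} "
                  f"(confidence {confidence:.0%}). Is that correct? [y/n] ")
        answer = input(prompt).strip().lower()
        return answer.startswith("y")

    if confirm_with_human("pressure valve", 0.82):
        print("Proceeding with the inspection task.")
    else:
        print("Flagging the detection for re-scan and human guidance.")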

Building trust in automation is evolving. The following are some of my recommendations for your organization to build trust in automation systems you are looking to adopt or implement:

Recommendation 1: Work with automation solution providers to increase transparency about how the decisions made by algorithms are affected by different decision inputs. This is referred to as "explainable AI." Let's use the self-driving car as an example. If you fear a self-driving car will not stop for you in a crosswalk, the car manufacturer could provide some transparency on the factors used in deciding to stop or go -- for example, how much of the decision is based on the LIDAR (light-based sensors at the top of the car) and how much on the radar (radio-wave sensors at the front of the car). LIDAR sensors are less effective in fog or heavy snow, while the front radar does not lose its effectiveness in those conditions. Also, some suggest that these cars provide signals about what the car is doing -- a green light if the car is accelerating, a red light if it is going to slow down or stop -- so pedestrians can make their own judgments about their safety risk.
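
As a purely illustrative sketch of what "explainable" output could look like, the code below returns a stop/go decision together with the factors behind it. The sensor names, weights and thresholds are hypothetical and are not drawn from any real vehicle system:

    # Hypothetical sketch of an "explainable" stop/go decision at a crosswalk.
    # Sensor confidences, weights and the 0.5 threshold are invented for illustration.

    def decide_to_stop(lidar_confidence, radar_confidence, weather):
        # LIDAR is less reliable in fog or heavy snow, so weight radar more heavily there.
        if weather in ("fog", "heavy_snow"):
            weights = {"lidar": 0.3, "radar": 0.7}
        else:
            weights = {"lidar": 0.6, "radar": 0.4}

        pedestrian_score = (weights["lidar"] * lidar_confidence +
                            weights["radar"] * radar_confidence)
        decision = "STOP" if pedestrian_score >= 0.5 else "GO"

        # Return the decision along with the factors that drove it, so a human can see why.
        return {"decision": decision,
                "pedestrian_score": round(pedestrian_score, 2),
                "sensor_weights": weights,
                "weather": weather}

    print(decide_to_stop(lidar_confidence=0.4, radar_confidence=0.9, weather="fog"))
    # {'decision': 'STOP', 'pedestrian_score': 0.75, 'sensor_weights': {'lidar': 0.3, 'radar': 0.7}, 'weather': 'fog'}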

Recommendation 2: Mitigate risk by establishing limits on automated decision-making authority and deferring non-urgent exception conditions to a human review. For example, if a company uses Robotic Process Automation to prepare checks or ACH payments under $750, then an exception condition requiring human review is created when the data the robot is working with shows a check amount of $1,000. A simple sketch of this logic follows.
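
Here is a minimal sketch of that kind of authority limit, assuming a hypothetical payment record (the field names are illustrative; the $750 limit comes from the example above):

    # Illustrative sketch of a decision-authority limit in an RPA payment workflow.
    # The $750 limit comes from the example above; the payment fields are invented.

    AUTO_APPROVAL_LIMIT = 750.00  # the robot may release payments at or under this amount

    def route_payment(payment):
        """Release small payments automatically; flag larger ones for human review."""
        if payment["amount"] <= AUTO_APPROVAL_LIMIT:
            return {"action": "release", "payment_id": payment["id"]}
        # Exception condition: outside the robot's authority, so a person decides.
        return {"action": "human_review",
                "payment_id": payment["id"],
                "reason": f"amount {payment['amount']:.2f} exceeds limit {AUTO_APPROVAL_LIMIT:.2f}"}

    print(route_payment({"id": "ACH-1001", "amount": 1000.00}))
    # {'action': 'human_review', 'payment_id': 'ACH-1001', 'reason': 'amount 1000.00 exceeds limit 750.00'}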

Recommendation 3: Crawl before you walk. The use of automation, robots and AI should start with a higher degree of collaboration between the automation and the operators. Replacing operators (the humans performing a task) with automation should not be the first step. Once trust in the automation system is established, the knowledge of those operators should be leveraged as much as possible on higher-value tasks. That may involve repositioning and retraining so they are focused on monitoring the automation system, interpreting advanced metrics, collaborative networking to explore additional innovation opportunities, etc.

* *

When I was younger, we always thought life would become easier as more technology was utilized. I think life is becoming more productive, but I don't believe it is easier.

The robots are coming. It is up to us to find effective uses of automation, robots and AI in the workplace to better serve customers and gain a competitive advantage in the market. We need to build up our mechanisms to increase the organization's trust in those automation systems. Otherwise, let's talk. I might have some swamp land to sell to you.

Mike Goodfriend is a Teamwork Engineer, Leadership Coach and Meeting Facilitator. He has been helping teams and leaders build stronger trust and working relationships for more than 20 years. Contact Mike if you are considering a leadership program for your high potential leaders or coaching for a leader to drive innovation. Mike can be reached at mikeg@goodfriendconsulting.com.

© Goodfriend & Associates, Inc., 2019



From Our Readers

“I loved your article about Team Offsites. I have been to many offsite meetings that failed because one or more of the items you mentioned was missing."

“I thought the insights you provided were thought provoking and on target. You have a gift at taking these "common life" situations and drawing strong parallels with the business world. Thank you!”

"Mike, thanks for sharing, some good learnings and enjoyed the correlation. I will have to use this on my British colleagues."

"Awesome news letter! It made me smile and refresh some great memories I had as a kid.   Greatest lessons in life I ever learned were on a baseball diamond as a kid. Thanks." 

"Dr. Jekyll and Mr. Hyde. A clever way to explain and consider the Birkman Method. I appreciate you sending this to me!"

"Well done.  Your best Goodfriend Insights yet."