By Kip Wolf, X-Vax Technology, @KipWolf
Despite the best efforts of those directly responsible for data integrity, human error will still find its way into the equation and undermine those efforts. Business managers and organizational leaders must always consider how human behavior affects their data integrity, as even the most mature organizations with the most evolved data integrity practices experience turnover or hire new staff who must be educated about and integrated into the organization’s culture and operations.
The potential for human error is directly and indirectly impacted by the corporate culture, the national/regional culture, and the quality culture of an organization. For example, aversion to confrontation, an attitude of apathy or complacency, and low morale or disinterest all affect a person’s ability to effectively address issues that can affect data integrity.
Human Error Is Influenced By Culture
The impact of national/regional culture is amplified by the international nature of modern business, particularly as it relates to intercultural communication. Different national and regional cultures communicate using different degrees of contextual communication. Japan is widely known as a high-context culture (i.e., one that makes extensive use of nonverbal cues and nuance), while Germany is commonly known as a low-context culture (i.e., one that favors explicit, precise language that is direct and to the point). Regional context also plays a role, such as when using local slang or colloquialisms (imagine using Boston slang in New Orleans or vice versa).
Where good communication occurs in a high-context setting (i.e., significant shared experience, references, etc.), it may be more layered, implicit, and nuanced. Where good communication occurs in a low-context setting (i.e., very few shared reference points, little shared body of knowledge, or few established relationships), it is commonly simpler, more explicit, and more direct. Context is situational, relative, and changes frequently.
Colleagues in a location-specific meeting of employees from the same department and the same team may leave the meeting with an implicit understanding of what needs to be done next. However, simply adding a team member from a different location may require more explicit, lower-context communication to achieve the same level of understanding. For example, a team of packaging operators from the same site, packaging line, and shift may understand what is needed from a simple meeting conclusion of “perform line clearance.” However, adding a packaging team member from another site to the same meeting may require additional instructions and explanation (e.g., “execute SOP No. 1234 and complete the appropriate sections of the batch packaging record”).
Even with the appropriate knowledge, skills, and abilities, human error may have a greater probability of occurrence when high-context communication is employed. We have found examples of errors due to contextual misunderstandings, such as operators assuming that another operator had documented in the batch record the two-person steps that were performed, or a bioreactor being labeled incorrectly because of vague instructions from fellow operators.
Validated Processes And Modern Technology Still Won’t Prevent Human Error
Great lengths are taken to perform defensible process validation within qualified and validated technology solutions, yet humans still find ways to muck up data integrity. Consider the example I recently cited in the article titled Using the QTA to Align Data-Integrity Expectations, where two global companies headquartered in different countries managed to document “IND effective date” differently. This was the result of high-context communication across cultures, where low-context communication would likely have prevented the human error.
Another example of human error likely caused by (or at least exacerbated by) high-context communication was the case of the virtual company that failed to have “a clear definition, procedures, or agreement on the identification and controls around source/raw data” (see Startups, Cloud Storage, & Data Integrity: Don't Let This Happen To You!). Assumptions were made that the definition of source/raw data was commonly understood, and data integrity errors resulted as data edits were made at multiple transactional points along the product life cycle. For example, values related to Certificates of Analysis were rounded, edited for significant digits, or recalculated at each step of the process rather than directly transcribed and certified as true copies. While the operators were well-intentioned, the result was a significant negative impact on data integrity, conformance to file, and operations in general.
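The compounding effect described above, re-rounding a value at each transactional hand-off instead of transcribing it directly from the source, can be sketched in a few lines. The starting value and the precision chosen at each step are hypothetical, purely for illustration:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_to(value: Decimal, places: int) -> Decimal:
    """Round half-up to a given number of decimal places,
    as a hypothetical COA transcription step might."""
    quantum = Decimal(10) ** -places
    return value.quantize(quantum, rounding=ROUND_HALF_UP)

source = Decimal("2.3456")  # hypothetical assay result in the raw data

# Error-prone pattern: each hand-off re-rounds the previous step's output.
step1 = round_to(source, 3)  # 2.346
step2 = round_to(step1, 2)   # 2.35
step3 = round_to(step2, 1)   # 2.4

# True-copy pattern: transcribe the source value, rounding once at reporting.
direct = round_to(source, 1)  # 2.3

print(f"chained: {step3}, direct: {direct}")
```

Here the chained value (2.4) no longer matches what a single rounding of the source value produces (2.3), even though every individual step looks reasonable in isolation, which is exactly why defining the source/raw data and requiring direct transcription matters.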
Hope For The Best, Assume The Worst
While communication may be crafted with high context in mind, consistently crafting it with low context in mind significantly improves the probability of maintaining data integrity and preventing human error. In other words, we find that it is safer to be explicit even when implicit communication might suffice.
I recall a situation I personally witnessed many years ago, when the president of the manufacturing division of a top global pharmaceutical company was addressing the entire division on the eve of launching a series of new enterprise systems. It was year three of a four-year, quarter-billion-dollar program to integrate and standardize the technology systems so that they would use the “same information.” In his address, the president stated that the systems use the “same information” by “talking to one another,” meaning that the key data elements were shared. He was quickly interrupted, a side-bar conversation ensued, and he became red in the face and backpedaled, explaining that the systems “duplicated information” but did not “talk” to share information.
The situation was embarrassing, disheartening, and terribly unfortunate. It was a grave oversight, the result of high-context communication: the phrase “same information” had been used for years to describe system and process design but carried very different interpretations once implementation brought greater scrutiny. Where the president interpreted “same information” to mean an authoritative source of information shared with related systems through interconnection, the reality was that the systems were designed to have the information rekeyed and duplicated in each of many enterprise systems. While the new systems had many new features, a core objective of authoritative data sources through interconnectivity was missed, in part because of human error influenced by culture and context.
In summary, we find that for data integrity it is best to be direct in your communications, use simple explanations, and say what you mean. Using implied nuance or local language in procedures or business communication only serves to confuse external users or cross-cultural team members, especially when your processes are transferred to another location or operations are consolidated as the result of a merger or acquisition.
- Wolf, K. (2019, March 01). Using the QTA to Align Data-Integrity Expectations. Life Science Leader, The CDMO Leadership Awards, Supplement to 11(3), 28-30.
About The Author:
Kip Wolf is a principal at Tunnell Consulting, where he leads the data integrity practice. Wolf has more than 25 years of experience as a management consultant, during which he has also temporarily held various leadership positions at some of the world’s top life sciences companies. He worked inside Wyeth pre-Pfizer merger and inside Merck post-Schering merger. In both cases he led business process management (BPM) groups — in Wyeth’s manufacturing division and in Merck’s R&D division. At Tunnell, he uses his product development program management experience to improve the probability of successful regulatory filing and product launch. He also consults, teaches, speaks, and publishes on topics of data integrity and quality systems. Wolf can be reached at Kip.Wolf@tunnellconsulting.com.