First impressions in sales and procurement are critical and should be approached with intention and expertise. These initial interactions set the tone for the entire relationship, influencing trust and credibility.
Repliance is an expert-driven service that pairs industry experts with your company to respond to Vendor Security Questionnaires (VSQs) quickly and confidently. Internally, we use cutting-edge semantic extraction, but only as part of an expert system that helps our analysts respond to your VSQs with thoughtful insight, appropriate detail, and claims supported by documented evidence. Every question on every VSQ is answered by an expert who understands your business and its unique value.
Repliance has a principled stance towards VSQ answers: we believe they need to be justifiable, accurate, and trustworthy statements. We do not use Generative AI technologies to generate answers, because we believe they lack a number of the underlying traits needed to meet our quality bar.
As definitions in this space are often blurry, when we say Generative AI, we're talking about generative capabilities built on large language model (LLM) or machine learning technologies.
Generative AI systems are trained on large datasets in order to create reasonable approximations of meaningful human language. Training on these datasets produces model weights that allow the generation of linguistically coherent sentences. However, representing your business, security, and infrastructure by having a machine select from one of many possible sentences means a potential customer's first impression of your security stance is in the hands of automation.
Due to the lack of observability in these systems, it isn't possible to know why the Generative AI chose a particular answer. The statements "Your company meets this control" and "Your company doesn't meet this control" are both linguistically valid answers to a question about controls, but without a clear justification you have no way of knowing whether an answer is based on the reality of your company or on an artifact of the training data.
Generative AI lacks the ability to understand whether a statement it makes has any bearing on reality, or whether a claim is appropriate for the business. It can expose too much information just as easily as it can assert incorrect claims about your product, its infrastructure, and its specifications. Additionally, Generative AI will happily provide answers for things that are out of scope for your company, such as explaining how you meet PCI controls when PCI is irrelevant to your product.
The consequences of putting Generative AI in front-line customer support are already clear: a Canadian airline was held liable for a refund policy hallucinated by its AI chatbot. Hallucinations embedded in your contractual or sales obligations to your customers, however, can take significantly longer to surface.
As the output from a Generative AI cannot be easily deconstructed, understood, or reasoned about, a first reaction might be to use prompt engineering to create more data-driven claims. While high-quality prompt engineering can improve the relevancy and focus of answers, the scope of the prompt inputs and the domain-specific context required to answer a question need to be simultaneously vast and precise.
Inaccurate correlations will occur, and you still have no meaningful way to address the root cause: the randomness required to generate a novel (albeit formulaic) answer also exposes you to the risk of an incorrect one.
Over-sharing will occur, and you will have an automated system providing your potential clients with an inappropriate level of internal system detail. The correct answer for a given customer and questionnaire requires disclosing the right information, succinctly and in the right format, a balance an automated system cannot easily internalize.
Diminishing returns will occur. Prompt engineering is great, but if you already know the correct answer, writing the response yourself is significantly more expedient, and doesn't require spending a large portion of your time on additional meta-definition work that may or may not provide scalable value in the future.
Consider the problem of context within a familiar VSQ ask:
“Does your company use industry-standard access control practices?”
This question can refer to many different applications of access control, including personal accounts, physical access, or data access, to name a few. A given question may or may not carry enough context to answer it reliably, and a Generative AI system has no way to identify and highlight situations where the context is insufficient.
Generative AI systems can produce linguistically meaningful statements, but cannot guarantee that these statements have any bearing on the reality of your company. For example, if the system answers the question based on digital access control instead of physical access control, the answer might be technically correct but out of scope, causing additional round trips, delays, or, in the worst case, a customer dropping you entirely because they don't see you as able to accurately assess and meet a basic control bar.
Generative AI systems are trained on a broad pool of data, and will regress to average answers when given a question. Even though your company wants to highlight how it is unique or exceptional, Generative AI provides generic answers in generally repetitive language. You can improve this with prompt engineering or training, but you still cannot verify that Generative AI will highlight all of the meaningful areas for your business, because you cannot comprehensively link changes in its output to changes in prompts or training data.
You want a unique, subject matter expert voice when you interact with your customers on topics related to your security, to differentiate you from your competition. Generative AI systems won't respond in a way that reflects the expertise and focus of your company: non-differentiated answers that fail to frame your security practices in the context of your business won't set you apart from your competitors.
Repliance is here to take these risks out of your VSQ process, while also giving your employees time back to focus on the work you hired them to do.