What is the most important part of assessing new AI-enabled technologies from external vendors?
The Hidden Risks of Purchasing AI: How to Evaluate External Vendors
So, a salesperson has just pitched you an amazing new AI tool. They assure you it will auto-complete your emails, forecast your sales, and maybe even make your coffee. It looks glitzy, the demo was impressive, and you are tempted to sign the deal.
Stop.
Buying AI is not like buying Microsoft Word or a Zoom subscription. Bringing an external AI tool into your company means introducing a system that will make decisions, process sensitive data, and often act as a black box.
Due diligence is the most crucial part of vetting these vendors. You must pull back the curtain and ask the uncomfortable questions. Here, in plain English, is exactly what to look for.
1. The Black Box Problem (Explainability)
The most dangerous risk of AI is that even its developers do not always understand why it does what it does. This is known as the “black box” problem.
If the AI denies a client a loan or flags an employee as a potential fraud risk, why?
- Bad Vendor: “The logic is proprietary, and we cannot reveal it to you.”
- Good Vendor: They can show you the main factors (such as income or credit history) that drove the decision.
Why it matters: If the AI makes a biased or unlawful decision and you cannot explain it, the liability is yours, not the software’s.
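To make the ask concrete: the kind of explanation a good vendor should be able to produce is a per-feature breakdown of a single decision. Here is a minimal sketch using a toy linear scoring model; the feature names and weights are purely hypothetical, and real vendors would use attribution methods such as SHAP rather than raw weights.

```python
# Hypothetical feature weights for a toy loan-scoring model.
WEIGHTS = {"income": 0.5, "credit_history": 0.3, "debt_ratio": -0.4}

def explain_decision(applicant: dict) -> list:
    """Return each feature's contribution to the score, largest impact first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 0.8, "credit_history": 0.6, "debt_ratio": 0.9}
for feature, contribution in explain_decision(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

The output ranks the factors that drove this one decision, which is exactly what you should expect a vendor to surface for any loan denial or fraud flag.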
2. The Data Diet (Privacy & Security)
AI models are hungry; they feast on data in order to learn. You must know exactly what they are eating.
Ask the vendor directly: “Will you use my data to train your public model?”
- The Trap: Some vendors will take your confidential customer information, feed it into their giant shared model, and effectively put it to work for your competitors.
- The Fix: You need a guarantee of “data isolation.” Your information must stay in your own backyard, not end up in the communal pool.
3. The “Hallucination” Check (Accuracy)
Generative AI (such as ChatGPT) is known to “hallucinate”: to confidently make assertions that are entirely fabricated.
This is perilous for a business. You have to evaluate the vendor’s grounding.
- Does the AI cite its sources?
- Can it trace an answer back to the particular document it used?
- And what happens when it does not know the answer? (It must say “I don’t know,” not invent something.)
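The three bullets above can be turned into a simple acceptance test you run against a vendor's answers during a trial: every response must either cite source documents or explicitly decline. The response shape below is a hypothetical example, not any real vendor's API schema.

```python
# Hedged sketch of a grounding check: accept an answer only if it
# cites sources or honestly admits it does not know.

def passes_grounding_check(response: dict) -> bool:
    """Return True if the answer is cited or is an explicit 'I don't know'."""
    if response.get("answer") == "I don't know":
        return True
    return bool(response.get("sources"))  # requires a non-empty source list

grounded = {"answer": "Revenue rose 4% in Q3.", "sources": ["q3_report.pdf"]}
ungrounded = {"answer": "Revenue rose 40% in Q3.", "sources": []}
honest = {"answer": "I don't know", "sources": []}

print(passes_grounding_check(grounded))    # True
print(passes_grounding_check(ungrounded))  # False
print(passes_grounding_check(honest))      # True
```

Running a batch of your own documents and questions through a check like this during the pilot gives you a hallucination rate you can compare across vendors.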
4. The “Human-in-the-Loop”
Be highly suspicious of any seller pitching so-called “fully autonomous” agents that operate without human control.
The sweet spot in 2026 is “agentic AI”: systems that do the work but consult a human on major decisions.
- Question: Where is the human brake pedal?
- You must be able to shut it off or override a decision in real time if the AI begins to misbehave.
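The “brake pedal” can be sketched as a simple approval gate: the agent proposes actions, but anything above a risk threshold must be signed off by a person before it runs. The action names and threshold here are illustrative assumptions, not a specific product's design.

```python
# Minimal sketch of a human-in-the-loop approval gate.
APPROVAL_THRESHOLD = 0.5  # hypothetical risk score requiring human sign-off

def execute(action: str, risk: float, human_approves) -> str:
    """Run low-risk actions automatically; escalate risky ones to a human."""
    if risk <= APPROVAL_THRESHOLD:
        return f"executed: {action}"
    if human_approves(action):
        return f"executed after approval: {action}"
    return f"blocked by human: {action}"

print(execute("draft email reply", risk=0.1, human_approves=lambda a: True))
print(execute("wire $50,000", risk=0.9, human_approves=lambda a: False))
```

When vetting a vendor, ask to see where this gate lives in their product and how you, not they, configure the threshold.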
5. Who Owns the Output? (Intellectual Property)
This is a legal minefield. If an AI tool generates a piece of code, a logo, or a marketing blog post, who owns it?
- The Threat: If the AI was trained on copyrighted images (such as Disney cartoons) and your new logo resembles them, you could face a lawsuit.
- The Ask: Demand IP indemnification. It is legal jargon that means: “If we get sued because of your AI, you pay the lawyers, not us.”
Summary Checklist
Ask these three questions before signing:
- Explainability: How did the AI arrive at that decision?
- Privacy: Do you train your models on our data? (The answer should be “No.”)
- Liability: Will you have our back if the AI breaks the law?
