Good News & Bad News About AI with Michael Tennant
I had the chance to speak with digital security professional Mike Tennant about AI. Mike caters to SMEs that want to modernize without breaking the bank or leaving themselves at risk. We had a good news/bad news conversation. You probably already know the bad news: risk cannot be removed from AI. The good news is that there are ways to reduce it, some of which come down to human-based solutions: governance, following your own policies, and recognizing where AI reaches its limits.
One of his suggestions is to think of AI as water: “If you have cracks in your technology, AI is going to find them,” especially regarding “data and permissions.” He was not surprised to hear that an agent I built was able to access documents outside the single folder its guardrails permitted. Guardrails, he points out, aren’t a hard barrier, so test what AI can actually reach.
Staying on top of what people can reach matters, too. When people change roles or leave the organization, updating their permissions can lag, but with agentic AI, quick updates have become even more crucial.
Finally, he says to keep the old adage “Garbage In, Garbage Out” in mind. If AI can’t access good data, if prompts are imprecise or lack suitable examples of the desired standard, or if AI is asked to do something beyond its best use cases, we shouldn’t expect great results. The exciting things AI can do have, however, led to unrealistic expectations: “Suddenly, some people think it’s ‘Garbage In, Gospel Out.’” AI outputs aren’t gospel and, Mike says, they were never meant to be.
If you want to hear more from Mike, you can find him at yourfractionalcio.ca.