Canada has an AI problem, but not the one you might expect.
The talent and the technology are here, with generative AI alone projected to add an estimated $187 billion each year to the economy by 2030. Yet organizations remain hesitant when it comes to adoption.
According to Statistics Canada, only 12.2 percent of Canadian firms have integrated AI into their operations over the past year. That places Canada near the bottom among global competitors and further compounds the country’s existing productivity challenges.
What’s the holdup? A big part of it comes down to managing energy costs and performance, said Medi Naseri, CEO of Vancouver-based LōD Technologies. AI tools like large language models, deep-learning systems and AI-powered analytics require massive compute power to process requests. The International Energy Agency estimates that global electricity demand from data centers could more than double by 2030, with AI as the most significant driver.
For organizations running AI at scale, every request is a business decision. Each request carries a cost, and that cost climbs when faster responses are required, since more compute resources and energy are consumed. For those running thousands or even millions of requests across servers with different energy demands and operating costs, expenses can quickly spiral.
Without an efficient way to manage those tradeoffs, said Naseri, “You could end up paying millions in monthly bills.”
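The arithmetic behind that warning is straightforward. A back-of-the-envelope sketch makes the point; every figure below is a made-up assumption for illustration, not a real provider's rate:

```python
# Back-of-the-envelope illustration of how per-request costs compound at
# scale. All volumes and prices are made-up assumptions, not real rates.
requests_per_month = 50_000_000
tokens_per_request = 2_000

standard_price = 0.002 / 1_000   # USD per token, slower tier (illustrative)
premium_price  = 0.010 / 1_000   # USD per token, faster tier (illustrative)

def monthly_cost(price_per_token: float) -> float:
    """Total monthly spend at a flat per-token price."""
    return requests_per_month * tokens_per_request * price_per_token

print(f"standard: ${monthly_cost(standard_price):,.0f}/month")
print(f"premium:  ${monthly_cost(premium_price):,.0f}/month")
```

Under these assumed numbers, always paying for the fast tier costs five times more, about $1 million a month versus $200,000, which is why routing each request to the cheapest acceptable option matters at scale.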
Building control into AI
Naseri’s company saw an opening in that challenge. LōD Technologies started in 2021 helping data centers optimize energy costs in deregulated electricity markets. The company built tools to enable operators to respond in real time, running more compute when energy is cheap and scaling back when prices spike. That business now serves more than 100 sites across Texas and internationally.
About two years ago, as generative AI entered the mainstream, LōD began exploring how to apply its energy expertise to AI workloads.
But as the team started building solutions, they encountered an unexpected problem. Routing AI requests across multiple servers made sense for efficiency, but for customers handling sensitive information, it raised a critical compliance question. Where exactly does the data end up?
“Governance became a huge focus for us while we were designing this platform,” said Naseri. “Users should be in control of where and how their data is routed, but most companies lack the tools to track that. That’s one of the main reasons that most AI pilots fail.”
That lack of control can lead to bigger problems. Many companies want to use AI and recognize its potential, but without clear visibility over where their data goes, they delay implementation. Meanwhile, employees aren’t waiting. Frustrated by delays, many are using personal ChatGPT and Claude accounts to get work done, often without their employer’s knowledge.
“They are increasing their risk by avoiding facing the situation,” said Naseri.
Building on energy management expertise
That realization led to CLōD, an AI inference platform launched earlier this year. Operating at the inference layer (the point where AI models process requests and generate responses), CLōD is designed to act as an intelligent gateway between companies and the models they use, whether hosted by OpenAI, Anthropic, Google, or run on premises.
Unlike tools that specialize in either governance or cost optimization, CLōD offers control across multiple dimensions: cost management, latency, model routing, model behavior, governance, safety, compliance, privacy and energy efficiency. Users define their own rules about what data can leave their environment, which servers requests can route to, and how models should perform per request. The platform enforces those rules automatically.
The platform’s approach to control extends across several areas. For cost and performance, users can choose which model handles each request and whether to prioritize low latency or cost efficiency.
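A minimal sketch shows what per-request rules of this kind can look like. This is not CLōD's API; the endpoint names, regions, prices and latencies are all hypothetical:

```python
# Hypothetical sketch of per-request routing rules: a region allow-list
# plus a speed-vs-cost priority. Names and numbers are illustrative only.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    region: str
    cost_per_1k_tokens: float   # USD, illustrative
    avg_latency_ms: float

ENDPOINTS = [
    Endpoint("fast-model", "us-east", 0.030, 250),
    Endpoint("cheap-model", "us-west", 0.004, 900),
    Endpoint("onprem-model", "on-prem", 0.002, 1200),
]

def route(priority: str, allowed_regions: set) -> Endpoint:
    """Pick an endpoint honoring the user's rules: only allowed
    regions are considered, then speed or cost decides."""
    candidates = [e for e in ENDPOINTS if e.region in allowed_regions]
    if not candidates:
        raise ValueError("no endpoint satisfies the routing rules")
    if priority == "speed":
        return min(candidates, key=lambda e: e.avg_latency_ms)
    return min(candidates, key=lambda e: e.cost_per_1k_tokens)

# A request restricted to on-premises hardware never leaves, even though
# a cheaper or faster cloud endpoint exists:
print(route("cost", {"on-prem"}).name)   # onprem-model
```

The key design point is that the region constraint is applied before any cost or speed optimization, so efficiency can never override a data-residency rule.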
CLōD also addresses governance by blocking restricted information and logging requests for audits, with options for private, isolated environments for sensitive workloads. These features are particularly important for organizations in heavily regulated sectors like healthcare and finance, where strict data policies determine whether AI adoption is even possible.
“Let’s say you’re a large health authority dealing with personal health information,” Naseri said. “Without control, if someone brings up AI, you have to say, ‘Stop here. That’s not something we can do. We’d love to – it could save us millions – but we have certain requirements.’”
Supporting these controls is an energy optimization capability that will soon go live. Built on LōD Technologies’ expertise in data center energy management, it is designed to intelligently manage compute resources based on real-time energy pricing.
Earlier this year, LōD was selected for Google’s inaugural AI for Energy accelerator, one of 15 North American startups chosen. Over four months, the company worked with Google’s data center and AI teams to refine the platform’s approach to both energy optimization and governance.
Naseri said LōD undergoes annual SOC 2 Type 2 audits, a standard requirement for working with organizations where mishandling data carries legal and financial consequences.
Unlocking adoption through predictability
LōD has grown from eight to 18 employees over the past year, and Naseri said another funding round is in the works for 2026. He credits Vancouver’s talent pool, largely fed by local universities, and said government support for tech companies has been strong. Naseri himself completed his PhD in electrical engineering at Simon Fraser University and launched LōD after going through SFU’s Invention to Innovation program at the Beedie School of Business.
He said the company’s success to date is proof that Vancouver has what it takes to compete as a global innovation hub. “But we need more people to take the risk, to bring capital to the city to scale up such technologies.”
As more organizations look to bridge the AI-adoption gap, Naseri said the solution isn’t just about making AI more powerful. It’s about making it more predictable. Better control over costs, latency, routing, model behavior and governance means organizations can move forward with confidence, knowing exactly how their AI systems will behave.
For heavily regulated industries that have been sitting on the sidelines, that predictability could be the difference between staying stuck in pilot mode and putting AI to work.
“We are building decentralized infrastructure for reliable AI,” he said. “As the usage of AI grows, and the energy concerns become bigger and bigger, we’re here to address that and, as a result, bring that reliable AI compute to the public and to the builders.”
PRESENTED BY

For a limited time, join an exclusive program designed to give your team tailored AI governance solutions, expert consulting, and up to 100M free tokens. Apply for CLōD’s Pilot Program.
