AI integration needs accountability, not just innovation
Artificial intelligence has already embedded itself into the rhythms of modern life, shaping decisions in ways that often go unnoticed. Amy Trahey, founder of Great Lakes Engineering Group, believes that integration is exactly what makes it powerful and, in many cases, risky. From her perspective in engineering, she sees AI as something that directly influences outcomes tied to public safety, funding, and long-term trust.
Her understanding of AI began outside formal training. It emerged through daily interactions with technology, from predictive recommendations to voice-enabled tools that respond almost instinctively, until those encounters crystallized into an epiphany.
Amy Trahey, P.E.
She says, “I realized how AI is integrated into everything. Whether I watch something on streaming platforms, whether I’m talking on the phone, and suddenly I’m seeing ads for what I spoke about, it’s already part of how we live, and it’s moving faster than any of us can keep up with.” That speed, in her view, creates a leadership gap. Organizations are adopting AI at scale, and Trahey believes many leaders underestimate how quickly their teams are already relying on it.
She points to studies showing that nearly three out of four companies now use AI in some capacity, interpreting that as proof that passive oversight is no longer viable. “You have to realize your team is going to use it. It’s not a question anymore. So if that’s the case, then it becomes your responsibility to understand it and make sure it’s being used the right way,” Trahey explains.
Education became her first step toward that responsibility. She enrolled in a five-week intensive program focused on AI prompting, approaching it with the same discipline she applies to engineering work. What she found reshaped her perspective. “It truly is transformational technology. This is on the level of the World Wide Web, but it’s evolving even faster,” Trahey shares. “It has great power to make positive changes, and naturally, it has the potential to be used the wrong way. It all comes down to intent and whether you’re doing things with integrity.”
At Great Lakes Engineering Group, Trahey works to balance that duality while keeping efficiency gains measurable. She points to using AI to translate complex engineering briefs and updates into concise, coherent client communication, and to generate structured meeting documentation in minutes instead of hours. The value, she argues, lies in augmenting human capability, not replacing it.
Oversight, however, remains fundamental to her process. She insists that no AI-generated output should move forward without human review, particularly in high-stakes environments. In her work overseeing bridge and transportation infrastructure projects, that due diligence carries even greater weight.
“It acts as an assistant for me, and sometimes as an advisor,” Trahey explains. “But everything still comes back to me. I review it before it goes anywhere. It’s known to hallucinate, and it can try to please you by giving you what it thinks you want to hear. That’s where human responsibility comes in. You cannot take your hands off the wheel.”
Responsibility extends into organizational culture as well, as Trahey recognized early that AI adoption within her team required structure, not restriction. Observing younger engineers already integrating these tools into their workflows prompted her to formalize guidelines. “We do bridge design. We’re working on things that are technically complex and tied to safety,” she says. “If people are using AI, then I need to understand it so I can create policies around what’s acceptable and what’s not. That’s part of leadership. You don’t ignore it. You define how it’s used.”
Her framework draws a clear line between ethical efficiency and misuse. Automating administrative tasks or organizing large datasets represents what she considers appropriate use. In her view, misrepresenting AI-generated work or exploiting time savings for financial gain reflects a breakdown in professional integrity. She speaks directly to that risk.
“There are people who will use it and then bill five hours for something that took five minutes. That’s not innovation. That’s a lack of integrity. And when you’re dealing with taxpayer money or public safety, that matters.”
Her concerns also extend to societal implications. Trahey believes the accessibility of AI introduces new risks that require coordinated oversight. “When something this powerful is accessible to every human being across the globe, there has to be some level of legislative involvement. We need guidelines and accountability. This isn’t just for technically savvy people anymore. This is for everybody,” Trahey shares.
Personal experience adds another layer to her perspective. Watching her son Quinn interact with AI as someone with autism has highlighted its potential and its complexity. She sees value in its ability to support communication, especially for individuals who struggle to express themselves. At the same time, she remains attentive to how that interaction is framed. “He sees it as something he can talk to, and there’s a benefit in that,” she explains. “But it’s my job to help him understand what it is and what it isn’t. It’s a tool, not a person. That distinction matters.”
Trahey’s approach to AI reflects a consistent principle: innovation should be pursued with intention, supported by education, and governed by clear standards. She believes organizations that engage with AI thoughtfully will be better positioned to harness its benefits without compromising trust. As the world accelerates into a new era of technological collaboration, that distinction, she says, makes all the difference.