IAS’ Declan Gowran explores his role in the data engineering space and how leaders create cohesive environments.
Declan Gowran, senior staff data ops engineer at IAS, says his journey into the data engineering world evolved organically from a broader IT infrastructure and cloud background.
He told SiliconRepublic.com, “Early in my career, I worked extensively on enterprise infrastructure, virtualisation and cloud deployments across multiple platforms, which exposed me to large-scale systems and the complexities of managing data at scale. Over time, I became increasingly fascinated with the ways structured and unstructured data can drive decision-making and AI applications.
“That led me to roles at Optum and IAS where I could focus on building secure, scalable data platforms, integrating DevOps, MLOps and data governance frameworks, and supporting enterprise AI workloads. Essentially, my path was shaped by a mix of curiosity, technical challenges and opportunities to work at the intersection of cloud, data, and analytics.”
What is the current data engineering landscape like in Ireland?
Ireland’s data engineering landscape is vibrant and rapidly maturing. With the strong presence of multinational tech companies and data-driven organisations, there is a growing demand for engineers who can not only manage cloud infrastructure but also design modern, scalable data platforms. Organisations are increasingly adopting cloud-native architectures, Kubernetes-based platforms and MLOps frameworks. There’s also an emphasis on governance, compliance and data mesh strategies, particularly for companies handling sensitive or regulated data.
What are the biggest challenges currently impacting the data engineering sector and how might they be addressed?
Data governance and trust at scale: as data powers AI and decision-making, ensuring quality, lineage and secure access, while meeting regulations like GDPR, is critical. This requires strong governance frameworks and centralised metadata to maintain consistency and control.

Complexity across distributed environments: most organisations operate across multi-cloud and hybrid systems, which makes integration, standardisation and orchestration difficult. The focus here is simplifying architectures and using scalable, interoperable platforms to reduce fragmentation.

Scaling for real-time and AI-driven workloads: there’s increasing demand for low-latency data and reproducible AI pipelines. This means investing in streaming, automation and reliable infrastructure that can handle both batch and real-time use cases.

Overall, the solution isn’t just tooling; it’s aligning these capabilities to clear business outcomes, so data engineering drives measurable value rather than just technical capability.
What are you currently working on and what is its potential?
I’m currently leading the development of a secure, cost-optimised enterprise data platform at IAS, built on Databricks and Kubernetes. It’s designed to centralise governance while enabling scalable, self-serve access to data across the business. In parallel, we’re building AI gateways and services to support secure deployment of LLM and AI workloads, ensuring we can scale these capabilities responsibly. The potential is twofold. Internally, it significantly improves efficiency: teams can access trusted data faster and experiment more easily. Externally, it enables better products and outcomes, from more effective ad campaigns to improved transparency and performance.
What goes into creating a sturdy, cohesive team in data and engineering?
Creating a high-performing data and engineering team requires balancing technical expertise with collaboration, culture and shared values. I am a strong believer in investing in people and fostering a positive team environment. It’s not enough for team members to just understand the technology; they also need to get along, communicate effectively and support one another. I focus on mentorship and development, clear communication and alignment, cross-functional collaboration, breaking down silos, analytics, and empowerment and autonomy, as well as providing engineers with the right tools and frameworks to innovate while maintaining accountability. By prioritising people and culture, we create an environment where trust, communication and collaboration are strong, allowing innovation and high performance to become natural outcomes.
How can leaders in dynamic spaces create productive and cohesive working environments?
Leaders need to provide clarity, trust and structured autonomy. This involves setting clear goals, fostering a culture of feedback and encouraging innovation without micromanaging. Leveraging agile practices, automated workflows and transparent dashboards also helps teams measure progress and stay aligned. Equally important is supporting professional development, celebrating achievements and ensuring psychological safety so that team members can collaborate openly and take calculated risks.
Have you any predictions for how the data engineering space might evolve over the course of the next nine months?
Over the next nine months, I anticipate several key trends shaping the data engineering landscape. Firstly, the wider adoption of data mesh and governance frameworks, particularly for enterprises managing AI and agentic workloads, with a strong focus on data lineage, provenance and integrity: knowing where data comes from, how it changes and why. Secondly, increased emphasis on data quality and protection against data poisoning, as organisations recognise that “garbage in, garbage out” can compromise AI and agentic model outcomes. Thirdly, greater adoption of cloud-native and serverless architectures, enabling scalable, flexible and cost-efficient platforms capable of supporting large AI workloads, agentic processes and seamless connectivity across systems.
Also, the expansion of retrieval augmented generation, vector databases and connected pipelines, supporting advanced AI and agentic use cases while ensuring embeddings, knowledge sources and real-time data remain accurate, auditable and interoperable. A stronger focus on observability, performance and compliance, with distributed monitoring, automated validation and lineage tracking becoming standard to maintain trust in both traditional data and AI outputs. Lastly, the standardisation of AI model deployment and MLOps practices, enabling enterprises to scale foundation models, agentic workloads and intelligent workflows while maintaining governance, reproducibility and operational reliability.