Sumaiya Antara
Biography
Sumaiya Tabassum Antara is a doctoral researcher in Software Engineering at LUT University, Finland. Her research focuses on the evaluation and governance of AI systems in public administration, with particular attention to the socio-technical dimensions of responsible AI. Her work bridges software engineering, AI governance and public-sector innovation by examining how technical systems, institutional practices, and human experiences interact in AI-enabled public services.
Sumaiya holds a Master of Science in Software Engineering from LUT University, where her thesis investigated Agile–DevOps integration in regulated environments. Her research introduced a Constraint–Practice–Evidence (C–P–E) framework to explain how organizations adapt software development practices under regulatory and governance constraints, based on cross-continental qualitative research across Europe, Asia and North America. She also holds a Bachelor’s degree in Computer Science and Engineering and professional certifications including Red Hat Certified Engineer, Red Hat Certified System Administrator and Cisco Certified Network Associate.
Her professional background includes extensive experience in information systems audit, IT governance, and infrastructure security within highly regulated sectors such as banking, healthcare and critical infrastructure. Through her work at Grant Thornton and as a system analyst in international Agile and DevOps teams, she has conducted large-scale evaluations of IT governance controls, compliance frameworks, and secure system operations, translating technical findings into actionable governance and policy insights.
In her doctoral research, Sumaiya examines AI systems as socio-technical systems, combining conceptual analysis, expert perspectives, citizen experiences, and future-oriented methods. Her interests include responsible AI governance, AI evaluation frameworks, regulatory infrastructures, and human–technology interaction in secure and public-sector systems. She is particularly interested in how monitoring, evaluation and governance mechanisms can support adaptive and trustworthy AI deployment across different institutional and national contexts.