AI Infra Engineer - Serverless LLM

Edinburgh
5 months ago
Applications closed

Job Summary

We are seeking an AI Infra Engineer to design, develop, and optimize distributed AI systems for serverless AI platforms. The successful candidate will leverage expertise in large language models (LLMs) and system design to build robust, scalable solutions. This role offers a unique opportunity to contribute to innovative AI-driven systems, collaborating with cross-functional teams to deliver high-impact solutions in a fast-paced, research-driven environment.

Key Responsibilities

  • Design and implement scalable, distributed systems to support AI-driven workloads, ensuring high performance and reliability.

  • Develop robust software solutions using Python (and potentially C++) to address complex technical challenges in AI and distributed computing.

  • Work within a larger team to rapidly develop proof-of-concept prototypes to validate research ideas and integrate them into production systems and serverless infrastructure.

  • Work closely with cross-functional teams to participate in developing innovative AI infrastructure, data systems, and cloud computing technologies.

  • Implement resource scheduling and orchestration mechanisms to ensure efficient execution of distributed tasks.

Required:

  • Education: Bachelor's or Master's degree in Computer Science or a related technical field (PhD preferred but not required).

  • In-depth understanding of one or more of: distributed systems, cloud computing, ML systems, or multi-agent systems.

  • In-depth understanding of serverless platforms and containerization (e.g., Docker, Kubernetes).

  • Strong programming skills, with mastery of at least one language such as Python or C/C++.

  • Good communication and teamwork skills.

Desired:

  • PhD in computer science, distributed systems, machine learning, or a related field.

  • Experience in the full lifecycle of developing, deploying, and maintaining large-scale cloud production systems, demonstrating expertise in scalability, reliability, and performance optimization.

