AI Prompt Engineer (technical engineering)

  • Any
  • London WC1A
  • Digital Consulting & AI
  • market
  • S1616915PROMPT
AI Prompt Engineer, Technically Sharp & Systems-Minded
You’ll design and optimize prompts, architect LLM-powered systems and deploy scalable GenAI workflows that connect people and intelligent systems in new, high-impact ways.

THE ROLE

Prompting & Reasoning Systems

  • Design, test and optimize prompts for leading frontier models (GPT-4/5, Claude 3.x, Gemini 2.x, Mistral Large, LLaMA 3, Cohere Command R+, DeepSeek).
  • Apply advanced prompting strategies:
    Chain-of-Thought, ReAct, Tree-of-Thoughts, Graph-of-Thoughts, Program-of-Thoughts, self-reflection loops, debate prompting and multi-agent orchestration (AutoGen/CrewAI).
  • Build agentic workflows with tool calling, memory systems, retrieval pipelines and structured reasoning.
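
As a minimal illustration of the agentic, tool-calling style of work described above, here is a sketch of a single-tool agent loop using the OpenAI Python SDK; the model name, the lookup_weather tool and its schema are illustrative assumptions rather than part of the role specification.

    import json
    from openai import OpenAI

    client = OpenAI()

    def lookup_weather(city: str) -> str:
        # Hypothetical tool; a real agent would call an external service here.
        return f"Sunny and 18C in {city}"

    TOOLS = [{
        "type": "function",
        "function": {
            "name": "lookup_weather",
            "description": "Return the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What's the weather in London?"}]
    for _ in range(5):  # bound the loop so a sketch can never spin forever
        resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=TOOLS)
        msg = resp.choices[0].message
        if not msg.tool_calls:          # the model answered directly, so we are done
            print(msg.content)
            break
        messages.append(msg)            # keep the assistant's tool request in context
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            result = lookup_weather(**args)  # dispatch to the matching local tool
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})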

GenAI Application Engineering

  • Integrate LLMs into applications using LangChain, LlamaIndex, Haystack, AutoGen and OpenAI’s Assistant API patterns.
  • Build high-performance RAG pipelines using:
    hybrid search, reranking, embedding optimization, chunking strategies and evaluation harnesses (a minimal retrieval sketch follows this list).
  • Develop APIs, microservices and serverless workflows for scalable deployment.
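
To make the RAG bullet above concrete, the sketch below indexes two pre-chunked passages in an in-memory ChromaDB collection and grounds an answer on the retrieved context; the collection name, sample passages, prompt template and model are illustrative assumptions, and a production pipeline would add hybrid search, reranking and an evaluation harness.

    import chromadb
    from openai import OpenAI

    chroma = chromadb.Client()                      # in-memory vector store
    collection = chroma.create_collection("docs")   # uses the default embedding function

    # Index a few pre-chunked passages (ids must be unique strings).
    collection.add(
        ids=["c1", "c2"],
        documents=[
            "Invoices are processed within 5 working days.",
            "Refunds require a signed approval from finance.",
        ],
    )

    question = "How long does invoice processing take?"
    hits = collection.query(query_texts=[question], n_results=2)
    context = "\n".join(hits["documents"][0])       # top-k passages for the single query

    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    llm = OpenAI()
    answer = llm.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(answer.choices[0].message.content)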

ML/LLM Engineering

  • Work with AI/ML pipelines through Azure ML, AWS SageMaker, Vertex AI, Databricks, or Modal/Fly.io for lightweight LLM deployment.
  • Utilize vector databases (Pinecone, Weaviate, Milvus, ChromaDB, pgVector) and embedding stores.
  • Use AI-powered dev tools (GitHub Copilot, Cursor, Codeium, Aider, Windsurf) to accelerate iteration.
  • Implement LLMOps/PromptOps using:
    • Weights & Biases, MLflow, LangSmith, LangFuse, PromptLayer, Humanloop, Helicone, Arize Phoenix
  • Benchmark and evaluate LLM systems using Ragas, DeepEval and structured evaluation suites.
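
A minimal sketch of the evaluation side of this work, assuming a tiny exact-match test set and MLflow for metric logging; the test cases, scoring rule and run name are illustrative, and a real suite would use graded rubrics or frameworks such as Ragas or DeepEval.

    import mlflow
    from openai import OpenAI

    client = OpenAI()
    CASES = [
        {"prompt": "What is 2 + 2? Reply with the number only.", "expected": "4"},
        {"prompt": "What is the capital of France? Reply with one word.", "expected": "Paris"},
    ]

    with mlflow.start_run(run_name="prompt-eval-demo"):
        correct = 0
        for case in CASES:
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": case["prompt"]}],
                temperature=0,                        # keep scoring as deterministic as possible
            )
            output = resp.choices[0].message.content.strip()
            correct += int(case["expected"].lower() in output.lower())
        mlflow.log_metric("exact_match_rate", correct / len(CASES))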

Deployment & Infrastructure

  • Containerize and deploy workloads with Docker, Kubernetes, Knative and managed inference endpoints.
  • Optimize model performance with quantization, distillation, caching, batching and routing strategies.
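
As a sketch of the caching and routing strategies mentioned above, the FastAPI service below serves repeated prompts from an in-memory cache and routes short requests to a cheaper model; the routing heuristic, cache policy and model names are illustrative assumptions, not a prescribed design.

    from fastapi import FastAPI
    from openai import OpenAI
    from pydantic import BaseModel

    app = FastAPI()
    llm = OpenAI()
    cache: dict[str, str] = {}      # in-memory response cache; Redis would replace this in production

    class Query(BaseModel):
        text: str

    def pick_model(text: str) -> str:
        # Route short prompts to a cheaper model and longer ones to a stronger model.
        return "gpt-4o-mini" if len(text) < 200 else "gpt-4o"

    @app.post("/generate")
    def generate(query: Query) -> dict:
        if query.text in cache:     # serve repeated prompts straight from the cache
            return {"answer": cache[query.text], "cached": True}
        resp = llm.chat.completions.create(
            model=pick_model(query.text),
            messages=[{"role": "user", "content": query.text}],
        )
        answer = resp.choices[0].message.content
        cache[query.text] = answer
        return {"answer": answer, "cached": False}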

EXPERIENCE

  • Strong Python skills, with experience using Transformers, LangChain, LlamaIndex and the broader GenAI ecosystem.
  • Deep understanding of LLM behavior, prompt optimization, embeddings, retrieval and data preparation workflows.
  • Experience with vector DBs (FAISS, Pinecone, Milvus, Weaviate, ChromaDB).
  • Hands-on knowledge of Linux, Bash/PowerShell, containers and cloud environments.
  • Strong communication skills, creativity and a systems-thinking mindset.
  • Curiosity, adaptability and a drive to stay ahead of rapid advancements in GenAI.

BENEFICIAL

  • Experience with PromptOps & LLM Observability tools (PromptLayer, LangFuse, Humanloop, Helicone, LangSmith).
  • Understanding of Responsible AI, model safety, bias mitigation, evaluation frameworks and governance.
  • Background in Computer Science, AI/ML, Engineering, or related fields.
  • Experience deploying or fine-tuning open-source LLMs.

TECH STACK

LLMs: GPT-4/5, Claude 3.x, Gemini 2.x, Mistral Large, LLaMA 3, Cohere Command R+, DeepSeek
Frameworks: LangChain, LlamaIndex, Haystack, AutoGen, CrewAI
Tools: GitHub Copilot, Cursor, LangSmith, LangFuse, Weights & Biases, MLflow, Humanloop
Cloud: Azure ML, AWS SageMaker, Google Vertex AI, Databricks, Modal
Infra: Python, Docker, Kubernetes, SQL/NoSQL, PyTorch, FastAPI, Redis

Staffworx is a UK-based Talent & Recruiting Partner, supporting the Digital Commerce, Software and Value-Add Consulting sectors across the UK & EMEA.

