Responsibilities
* End-to-end ownership of ML, AI and agentic AI use cases: problem framing, data preparation, modelling, orchestration, evaluation, and deployment to production.
* Design agent architectures (planner–executor, multi-agent collaboration) with memory, reflection, and tool-use; ensure robustness, transparency, and controllability.
* Implement MCP (Model Context Protocol) for standardized, secure tool integrations and capability discovery across internal and external services (see the MCP server sketch after this list).
* Orchestrate LLMs and tools using agent frameworks (e.g., LangChain, LlamaIndex, Semantic Kernel, OpenAI Assistants), including function / tool calling, fallbacks, and guardrails (see the tool-calling sketch after this list).
* Build and optimize RAG pipelines: ingestion, chunking, embeddings, vector stores, retrieval strategies, caching, and evaluation for precision / recall and latency (see the retrieval sketch after this list).
* Establish strong LLMOps / MLOps practices: experiment tracking, prompt / dataset versioning, CI / CD, model / artefact registry, monitoring, and incident response.
* Drive reliability, safety, and compliance: prompt-injection defenses, content filtering, policy enforcement, red‑teaming, and measurable quality gates.
* Conduct rigorous offline / online evaluation: backtesting, time-series cross-validation, A/B and shadow deployments, canary releases, drift and impact monitoring (see the cross-validation sketch after this list).
* Optimize performance and cost (latency, throughput, rate limits, batching, streaming, caching); maintain usage and cost dashboards.
* Collaborate with product owners, engineers, and business stakeholders; deliver scalable solutions on Databricks (AWS) and integrate with bank systems.
* Produce clear documentation, reusable templates, and playbooks; mentor teammates and contribute to community best practices.
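For illustration of the MCP item above, here is a minimal tool-server sketch. It assumes the FastMCP interface from the official `mcp` Python SDK; the server name, tool, and data are hypothetical placeholders, not a reference to any real internal service.

```python
# Minimal MCP tool server (assumes the official Python SDK's FastMCP interface).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("bank-tools")  # illustrative server name

@mcp.tool()
def fx_rate(base: str, quote: str) -> float:
    """Return an FX rate for a currency pair (stubbed placeholder data)."""
    rates = {("EUR", "USD"): 1.09}  # hypothetical static data, not a live feed
    return rates.get((base.upper(), quote.upper()), 1.0)

if __name__ == "__main__":
    mcp.run()  # serves over stdio so MCP clients can discover and call fx_rate
```

Registering tools this way lets any MCP-capable client discover the tool's name, docstring, and typed signature without bespoke glue code.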
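A sketch of function / tool calling with a simple fallback, assuming the OpenAI Python SDK (v1+). The tool schema, model names, and catch-all fallback policy are illustrative assumptions; production code would use narrower error handling and real guardrails.

```python
# Tool calling with a model fallback (assumes the OpenAI Python SDK v1+).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_account_balance",  # hypothetical internal tool
        "description": "Look up the balance of a customer account.",
        "parameters": {
            "type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"],
        },
    },
}]

def ask(prompt: str, model: str = "gpt-4o"):
    messages = [{"role": "user", "content": prompt}]
    try:
        resp = client.chat.completions.create(model=model, messages=messages, tools=tools)
    except Exception:
        # Fallback: retry on an alternate model without tools if the primary call fails.
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    msg = resp.choices[0].message
    if getattr(msg, "tool_calls", None):  # the model chose to call a tool
        call = msg.tool_calls[0]
        return call.function.name, json.loads(call.function.arguments)
    return None, msg.content
```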
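A compact sketch of the retrieval core of a RAG pipeline: fixed-size chunking with overlap, embedding, and top-k cosine retrieval. The `embed` function is a deliberately crude stand-in for a real embedding model, and the chunk sizes are arbitrary.

```python
# RAG retrieval core: chunking + embeddings + top-k cosine retrieval.
import numpy as np

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks with overlap to preserve context."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(texts: list[str]) -> np.ndarray:
    """Stand-in embedder: character histogram, unit-normalized.
    A real pipeline would call an embedding model here."""
    vecs = np.zeros((len(texts), 64))
    for i, t in enumerate(texts):
        for ch in t:
            vecs[i, ord(ch) % 64] += 1.0
    return vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9)

def retrieve(query: str, chunks: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    q = embed([query])[0]
    scores = index @ q  # cosine similarity, since all rows are unit vectors
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

docs = chunk("...corpus text...")  # placeholder corpus
top = retrieve("onboarding steps", docs, embed(docs))
```

Swapping the stub for a real embedder and the numpy matrix for a vector store changes the plumbing but not the shape of the pipeline.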
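A sketch of leakage-free offline evaluation with scikit-learn's `TimeSeriesSplit`: each fold trains on the past and validates on the window that follows, so no future data leaks into training. The model and toy data are placeholders.

```python
# Time-series cross-validation: train on the past, validate on the future.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = np.arange(100, dtype=float).reshape(-1, 1)  # toy time-ordered feature
y = 0.5 * X.ravel() + rng.normal(size=100)      # toy target with noise

for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    model = Ridge().fit(X[train_idx], y[train_idx])
    mae = mean_absolute_error(y[test_idx], model.predict(X[test_idx]))
    print(f"fold {fold}: train={len(train_idx)} test={len(test_idx)} MAE={mae:.3f}")
```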
Qualifications
* 3+ years professional experience in developing and operating software solutions.
* You are a versatile team player who enjoys collaboration but can also solve problems on your own.
* 2+ years professional experience (hands-on) with cloud computing on AWS (Amazon Linux and AWS Lambda).
* 2+ years professional experience (hands-on) with various database technologies, ideally including data lakes (AWS S3).
* 2+ years professional experience working in a product‑driven environment.
* Programming skills: Python experience (ideally with Django) and experience building RESTful APIs.
* You have a demonstrated ability to build processes that support data transformation, data structures, metadata, and dependency and workload management.
* You have basic know-how of ETL tools.
* You are familiar with agile working methods.
Soft Skills
* Proactivity; Curiosity; Responsibility; Ideas & Confidence.
* Structured working approach and problem-solving skills.
* Fluent English; German or another CEE language is appreciated, but not mandatory.
Should you be interested in this opportunity, please share your updated CV in Word format together with your daily all-inclusive rate.
Thank you.