EnliteAI is a technology provider for Artificial Intelligence, specialized in Reinforcement Learning and Computer Vision/GeoAI. They offer services such as AI Strategy & Transformation, AI Lab, and Prototyping & Project Delivery. Their GeoAI technology leverages mobile mapping data to efficiently identify road signs, road markings, and road defects. They are also the maker of Maze, one of the first open-source frameworks for applied Reinforcement Learning, and offer solutions for Power Grid Optimization.

Experience Requirements:
- 3+ years of work experience in data-driven environments
- Passionate about everything related to AI, Machine Learning and Computer Vision

Other Requirements:
- Python programming skills, with an emphasis on data engineering and distributed processing (e.g. Flask, Postgres, SQLAlchemy, Airflow)
- Proficiency in working with databases and data storage solutions
- Experience with Kubernetes and Docker (Helm, Terraform, Amazon Elastic Kubernetes Service)
- Familiarity with cloud environments (AWS, Google Cloud, Azure)
- Accustomed to mature software development workflows (Git, issue management, documentation, unit testing, CI/CD)
- Fluent in English, both spoken and written
- Valid work permit for Austria

Responsibilities:
- Collaborate with our machine learning and backend engineers to design and manage scalable processing pipelines in production environments.
- Implement robust processing flows and I/O-efficient data structures, powering use cases such as road surface analysis, sign detection and localization on large volumes of point cloud and imagery data.
- Design and manage the relevant database schemas in close collaboration with backend engineering.
- Create and maintain comprehensive documentation of the processing pipelines, database schemas, configuration and software architecture.
- Collaborate with our machine learning and data engineers to integrate our models into processing pipelines and deploy them to production environments.
- Design and manage our GPU and CPU server infrastructure, from on-premises Kubernetes clusters to cloud deployments.
- Manage and orchestrate the data pipelines, data storage systems and associated synchronization processes for model training and execution.
- Own our CI/CD pipelines (based on GitLab).
- Establish monitoring of model, pipeline and infrastructure health, and set up logging to capture relevant information for debugging and auditing.

Benefits:
- Personal growth: continuous training and education opportunities, plus a budget and time allotment for the pursuit of individual R&D projects, training or conference participation.
- Flexible work models: remote work, an office in Vienna's 1st district and minimal core hours.