Tasks
- Literature research on multimodal LLMs and code generation
- Data preparation (text and model artifacts from vehicle development)
- Fine-tuning or adapter-based training of a suitable LLM (e.g., with PEFT)
- Modeling and preprocessing of graphical models (e.g., SysML, Simulink) for multimodal input
- Evaluation of the generated code artifacts using syntactic and semantic metrics
- Comparison with existing code generation approaches
Research tasks:
- Analysis of the current state of research on multimodal Large Language Models (LLMs) and their application in automated code generation
- Investigation of existing methods for processing and integrating different input modalities (e.g., text, graphics, models)
- Analysis of typical systems engineering artifacts (requirements, function models, e.g., SysML or Simulink) in the automotive development process
- Identification of suitable training methods (e.g., fine-tuning, PEFT, RAG) for LLMs with a focus on technical application domains
- Research on metrics and methods for evaluating the quality of generated code (e.g., syntactic correctness, functional consistency)
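As a concrete illustration of the "syntactic correctness" criterion named above: for generated Python code, a minimal check can be built on the standard library's ast module. This is an illustrative sketch of one possible metric, not a prescribed part of the task:

```python
import ast

def is_syntactically_valid(code: str) -> bool:
    """Return True if the given Python source parses without a SyntaxError."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

# A generated snippet that parses vs. one that does not
print(is_syntactically_valid("def add(a, b):\n    return a + b"))  # True
print(is_syntactically_valid("def add(a, b) return a + b"))        # False
```

Semantic or functional consistency would require stronger checks (e.g., executing generated code against reference test cases), which this parse-only check does not cover.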
Requirements
Degree programs:
- Computer Science
- Automotive Engineering
- Mechanical Engineering
- Electrical Engineering
- Data Science or a comparable degree program
Areas of study:
- Software Development and Programming
- Artificial Intelligence and Machine Learning
- Systems Engineering
- Data Science
Expert knowledge:
- Fundamentals of machine learning
- Understanding of the principles of systems engineering
- Experience in data preparation
- Initial experience in the automotive industry (e.g., through internships) desirable
IT skills:
- Confident use of MS Office
- Ideally, solid knowledge of Python
- Machine learning and AI frameworks (PyTorch, TensorFlow)
Soft skills:
- High level of initiative
- Strong analytical skills
- Structured working style
- Ability to work in a team
- Goal orientation

For more details, salary, and company information, use the apply link.