Data DevOps Engineer
iTrap
Your role
- building highly scalable data products for the automated consolidation, processing and analysis of large amounts of data for machine learning - from conception to prototyping to operation, in both batch and streaming
- evaluating the latest trends and technologies, helping shape our architecture decisions including DevOps / DataOps processes, and driving innovative solutions for data management in the machine learning context
- contributing to continuous improvement of quality, reusability and performance, and taking responsibility for applications and stakeholders during development, commissioning and operation
- automating and deploying data transformations and pipelines on the provided Data Platform
- working in a clean-code, test-driven software development environment with a DevOps mindset
- implementing exciting AI use cases (e.g. entity extraction, recommender systems, chatbots) with data of all kinds (e.g. time series, images, documents) in close cooperation with data scientists
Your profile
- At least 3 years of relevant professional experience as a DevOps engineer with a strong inclination towards data engineering
- Setup, configuration and automated deployment of infrastructure artifacts with tools like Jenkins, Puppet or Terraform
- Configuration and deployment of CI/CD pipelines using Git or similar tools
- Experience with cloud technologies like Microsoft Azure or similar environments
- Strong knowledge of serverless (Azure Data Factory, Synapse Analytics) or container-based (Kubernetes) application provisioning and deployment
- Strong experience in setting up network configuration, service communication and integration patterns (JDBC/ODBC, OData, REST APIs)
- Experience with monitoring, logging and telemetry systems (preferably Elastic, Datadog)
- Deep understanding of resource management and optimization (CPU, RAM, Disk)
- Agile software development using Scrum / Kanban