Tech Jobs for Talents without Borders
English-1st. Relocation-friendly. Curated daily by Imagine.

Senior Data Engineer

Capgemini

Data Science
Argentina
Posted on Oct 8, 2024

Brief description

We are seeking a skilled and motivated Data Engineer to join our team. The ideal candidate will be proficient in PySpark, Azure Databricks, Azure Data Factory, and SQL. As a Data Engineer, you will design, build, and maintain scalable data pipelines and infrastructure to support our data-driven initiatives.

Responsibilities:

  1. Develop and maintain data pipelines using PySpark to ingest, process, and transform large volumes of data.
  2. Design and implement ETL processes using Azure Data Factory to move data between various data sources and destinations.
  3. Collaborate with data scientists and analysts to understand data requirements and translate them into technical solutions.
  4. Optimize and tune data pipelines for performance and scalability.
  5. Ensure data quality and reliability by implementing data validation and monitoring processes.
  6. Troubleshoot and resolve issues related to data pipelines and infrastructure.
  7. Develop and maintain documentation for data pipelines, workflows, and data sources.
  8. Stay up-to-date with emerging technologies and best practices in data engineering and cloud computing.
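As an illustration of the data-quality work described in item 5, here is a minimal sketch in plain Python (the column names and validation rules are hypothetical; in a real PySpark pipeline this logic would typically run as DataFrame filters before data is written downstream):

```python
# Minimal data-quality validation sketch (hypothetical schema and rules).

def validate_rows(rows):
    """Split rows into valid and rejected, recording a reason for each reject."""
    valid, rejected = [], []
    for row in rows:
        if row.get("user_id") is None:
            rejected.append((row, "missing user_id"))
        elif not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            rejected.append((row, "invalid amount"))
        else:
            valid.append(row)
    return valid, rejected

sample = [
    {"user_id": 1, "amount": 19.99},
    {"user_id": None, "amount": 5.0},
    {"user_id": 2, "amount": -3},
]
good, bad = validate_rows(sample)
# good holds one valid row; bad holds two rejects, each paired with its reason
```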

Qualifications:

  1. Bachelor's degree in Computer Science, Engineering, or related field.
  2. Strong programming skills in Python and experience with PySpark for big data processing.
  3. Proficiency in SQL for querying and manipulating data in relational databases.
  4. Hands-on experience with cloud platforms such as Azure, particularly Azure Databricks and Azure Data Factory.
  5. Experience designing and building scalable and reliable data pipelines.
  6. Familiarity with data modeling concepts and techniques.
  7. Excellent problem-solving and troubleshooting skills.
  8. Strong communication and collaboration skills, with the ability to work effectively in a team environment.
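For context on the SQL proficiency listed in item 3, a small self-contained example using Python's built-in sqlite3 module (the table and data are purely illustrative):

```python
import sqlite3

# In-memory database with an illustrative orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "ana", 10.0), (2, "ana", 25.5), (3, "ben", 7.25)],
)

# Aggregate spend per customer, highest first.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY 2 DESC"
).fetchall()
# rows == [('ana', 35.5), ('ben', 7.25)]
```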

Preferred Qualifications:

  1. Experience with other big data technologies such as Hadoop, Kafka, or Apache Spark.
  2. Knowledge of containerization and orchestration tools such as Docker and Kubernetes.
  3. Experience with version control systems such as Git.
  4. Familiarity with machine learning concepts and frameworks.
  5. Certification in cloud computing or big data technologies is a plus.

If you're passionate about leveraging data to drive insights and decision-making, and you thrive in a fast-paced, collaborative environment, we encourage you to apply for this exciting opportunity!

#LI-AU1

#LI-Remote