
Quantum Data Engineer

IBM

Data Science
Texas City, TX, USA · California, USA
Posted on Saturday, November 18, 2023
Introduction
At IBM, work is more than a job – it’s a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you’ve never thought possible. Are you ready to lead in this new era of technology and solve some of the world’s most challenging problems? If so, let’s talk.

Your Role and Responsibilities
At IBM Quantum Software, we are seeking an experienced data architect and engineering lead. The ideal candidate has demonstrated skill in designing and implementing data acquisition pipelines to build data warehouses and data lakes, and is familiar with the concepts and technologies behind data governance, cataloguing and virtualization, making data accessible to analysts and dashboards while remaining compliant with the data regulations the organization follows. This role requires strong communication skills to work with data owners and consumers of the data, such as business and technical teams, as well as organizational skills in defining processes and methods to curate and present data stores in an easily consumable fashion, enabling a self-serve data consumption pattern in a governed and secure manner.

Locations are Albany, NY; Yorktown Heights, NY; Poughkeepsie, NY; San Jose, CA; Austin, TX; and Cambridge, MA.


Required Technical and Professional Expertise

  • At least 3 years of experience as a data architect building large-scale data warehouse and data lake solutions
  • At least 5 years of experience as a data engineer building solutions
  • Strong command of relational databases (PostgreSQL preferred), data modelling and database design
  • Proven experience in managing data persistence tools, such as PostgreSQL, MongoDB, Elasticsearch
  • Ability to document data models, architectural decisions and data dictionaries to enable collaboration, maintainability and usability of our analytics platforms and code
  • Strong command of Python and Python scripting; experience using Python for data pipelines
  • Experience with Shell Scripting for task automation
  • Experience with cloud-based data and object storage services (IBM Cloud preferred)
  • Experience in architecture and implementation of data virtualization and schema management, especially using tools such as Presto and Iceberg
  • Experience with data pipeline workflow automation tools like Airflow, dbt and Airbyte (a brief illustrative sketch follows this list)
  • Strong communication skills to interact with technical and business teams to prepare data assets
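For context on the pipeline tooling named above, here is a minimal, purely illustrative sketch (not a description of IBM's actual stack), assuming Airflow 2.4+ and its TaskFlow API; the pipeline and task names are hypothetical.

    # Illustrative only: a tiny Airflow DAG in the style of the pipeline work described above.
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2023, 11, 1), catchup=False)
    def example_ingest_pipeline():
        # Hypothetical pipeline: extract a few rows, then load them downstream.

        @task
        def extract():
            # A real task might pull records from an API or cloud object storage.
            return [{"id": 1, "value": 42}]

        @task
        def load(rows):
            # A real task would upsert into PostgreSQL (e.g. via the Postgres provider hook).
            print(f"loading {len(rows)} rows")

        load(extract())


    example_ingest_pipeline()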


Preferred Technical and Professional Expertise

  • Some experience with cloud monitoring for database instances
  • Some experience with cloud object storage and formats such as Parquet and Avro (see the short example after this list)
  • Ability to design and implement a complete lifecycle for data operations (DataOps)
  • Familiarity with designing and providing secure persistence and data access methods
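As a small illustration of the columnar formats mentioned above, the following sketch round-trips a table through a Parquet file with pyarrow; the file name is hypothetical, and a cloud deployment would target object storage rather than a local path.

    # Illustrative only: write and read a small Parquet file with pyarrow.
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Build a tiny in-memory table and persist it in columnar Parquet format.
    table = pa.Table.from_pydict({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})
    pq.write_table(table, "example.parquet")  # hypothetical local path

    # Read it back and print the schema plus a preview of the data.
    print(pq.read_table("example.parquet"))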