- Work experience with ETL, Data Modeling, and Data Architecture.
- Skilled in writing and optimizing SQL.
- Experience operating very large data warehouses or data lakes.
- Design and implement data engineering, ingestion, and curation functions on the AWS cloud using AWS-native services or custom programming.
- Experience with Big Data technologies such as Hadoop/Hive/Spark.
- Experience designing and implementing highly performant data ingestion pipelines from multiple sources using Apache Spark and/or Azure Databricks.
- Demonstrated efficiency in handling data: tracking data lineage, ensuring data quality, and improving data discoverability.
- Comfortable using PySpark APIs to perform advanced data transformations.
- Familiarity with implementing classes in Python (a brief PySpark sketch using a small Python class follows this list).
- Integrate end-to-end data pipelines that move data from source systems to target data repositories, ensuring data quality and consistency are always maintained.
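
As a rough illustration of the PySpark and Python-class requirements above, the sketch below wraps a simple deduplication transformation in a small Python class. The class, column names, and S3 paths are hypothetical examples, not part of this posting.

```python
# Minimal sketch: a curation step that keeps the latest record per key,
# wrapped in a Python class so it can be reused and unit-tested.
# All names and paths here are hypothetical.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


class OrderCurator:
    """Deduplicate raw orders, keeping the most recent row per order_id."""

    def __init__(self, key_col: str = "order_id", ts_col: str = "updated_at"):
        self.key_col = key_col
        self.ts_col = ts_col

    def curate(self, raw: DataFrame) -> DataFrame:
        # Rank rows within each key by recency, then keep only the newest one.
        w = Window.partitionBy(self.key_col).orderBy(F.col(self.ts_col).desc())
        return (
            raw.withColumn("_rn", F.row_number().over(w))
               .filter(F.col("_rn") == 1)
               .drop("_rn")
        )


if __name__ == "__main__":
    spark = SparkSession.builder.appName("curation-sketch").getOrCreate()
    # Hypothetical source and target locations (e.g., S3 buckets or Delta tables).
    raw_orders = spark.read.parquet("s3://example-bucket/raw/orders/")
    curated = OrderCurator().curate(raw_orders)
    curated.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
```

Packaging a transformation behind a class like this is one common way to keep curation logic reusable and testable across pipelines.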
- ID: #49379376
- State: Georgia, USA
- City: Alpharetta (ZIP 30004)
- Salary: USD TBD
- Job type: Contract
- Posted: 2023-02-28
- Deadline: 2023-04-28
- Category: Et cetera