- Strong programming skills in Spark, Python, and SQL
- Strong coding skills in PySpark: working with dataframes, reading from different sources, and writing to different sinks
- Experience building data pipelines using Azure Databricks and Azure Data Factory, or AWS Data Pipeline
- Experience working on Data Lake projects
- Experience with Databricks Delta tables and Databricks optimization (clusters, code, etc.)
- Analytical and problem-solving abilities
- Knowledge of the software development lifecycle and CI/CD
- Familiarity with the Delta Sharing concept in Databricks
- 3+ years of experience with Databricks
- 3+ years of professional experience in Python, Scala, and associated libraries
- 3+ years of professional experience with SQL and RDBMS development
- 3+ years of professional experience designing and developing with a cloud service (AWS Glue or Azure)
- 3+ years of professional data engineering experience
- Experience working in an Agile environment