Vacancy expired!
- design an Azure Data Lake solution | recommend file types for storage | recommend file types for analytical queries | design for efficient querying | design for data pruning | design a folder structure that represents the levels of data transformation | design a distribution strategy
- design a partition strategy for files | design a partition strategy for analytical workloads
- compression | partitioning | sharding | different table geometries with Azure Synapse Analytics pools | data redundancy | implement distributions | implement data archiving
- deliver data in a relational star schema | deliver data in Parquet files | maintain metadata | implement a dimensional hierarchy
Design and Develop Data Processing
- Ingest and transform data using Spark, T-SQL, Data Factory, and Synapse Pipelines
- Implement stream and batch pipelines.
- Data policies and standards: masking, encryption, row-level and column-level security, RBAC
- Data retention, auditing
- Manage sensitive information
Monitor and Optimize Data Storage and Data Processing
- Implement logging used by Azure Monitor, configure monitoring service
- Measure and improve data pipeline performance, cluster performance, query performance
- Manage storage-related optimizations such as compaction and handling data skew, tune queries using indexing and caching, troubleshoot failed Spark jobs
- ID: #44724431
- State: New Jersey
- City: Weehawken, 07086, USA
- Salary: Depends on Experience
- Job type: Contract
- Showed: 2022-08-09
- Deadline: 2022-10-02
- Category: Et cetera