Vacancy expired!
- Databricks, PySpark, and Python project development work experience is a must
- Expertise in Big Data ecosystems such as HDFS and Spark
- Experience with GitHub repositories
- Data Warehouse/Data Marts/Data Modeling/Analytics experience is a must
- Able to convert SQL stored procedures to Python code in the PySpark framework using DataFrames.
- Develop, design, tune and maintain SSIS packages to perform the ETL process.
- Design and develop SQL Server stored procedures, functions, views and triggers to be used during the ETL process.
- SQL Server development work experience with relational databases is a must
- Development of Stored Procedures for transformations in ETL pipeline
- Experience working with large datasets in T-SQL batch processing
- Should have work experience in Agile projects
- Write scripts for automated testing of data in the target facts and dimensions.
- Capture audit information during all phases of the ETL process.
- Write and maintain documentation of the ETL processes via process flow diagrams.
- Collaborate with business users, support team members, and other developers throughout the organization to help everyone understand issues that affect the data warehouse
- Good customer-interaction experience is required.
- Possess good interpersonal and communication skills.
- ID: #49149399
- State: Georgia, USA (30004)
- City: Alpharetta
- Salary: $0 - $0
- Job type: Contract
- Posted: 2023-02-11
- Deadline: 2023-03-21
- Category: Et cetera