- Experience creating data products that support analytic solutions.
- Should be self-motivated with strong problem-solving and self-learning skills.
- Be a strong advocate of a culture of process and data quality in a cross-functional team.
- Design, build, implement, and maintain data processing pipelines for the extraction, transformation, and loading (ETL) of data from a variety of data sources.
- Develop robust and scalable solutions that transform data into a useful format for analysis, enhance data flow, and enable end users to consume and analyze data more quickly and easily.
- Write complex SQL queries to support analytics needs.
- Evaluate and recommend tools and technologies for data infrastructure and processing.
- Collaborate with engineers, data scientists, data analysts, product teams, and other stakeholders to translate business requirements into technical specifications and working data pipelines.
- Work with tools, languages, data processing frameworks, and databases such as R, Python, SQL, MongoDB, Redis, Hadoop, Spark, Hive, Scala, Bigtable, Cassandra, Presto, Storm.
- Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses.
- Responsible for the design, implementation, and support of data-driven projects.
- Maintain strong working relationships with multiple technical and non-technical SMEs within the company.
- Communicate project needs efficiently to audiences with differing levels of technical understanding.
- Well versed in on-premises SQL Server and Microsoft Azure-based SQL, as well as related concepts: administration, best-practice development, and troubleshooting.
- Able to build and maintain processes supporting data movement and transformation, including Scala/Python script development (a minimal sketch follows this list).
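For illustration, a minimal PySpark sketch of the kind of data-movement script described above. The source path, target path, and column names are hypothetical, invented for the example; they are not part of the role's actual stack.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read a hypothetical raw CSV feed with a header row.
raw = spark.read.csv("/mnt/raw/orders.csv", header=True, inferSchema=True)

# Transform: drop malformed rows, derive a date column, normalize the amount.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
)

# Load: write partitioned Parquet for downstream analytics consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders")
```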
- 8-10+ years' experience as a data engineer or in a similar role developing modern data pipelines and applications for analytics use cases.
- Experience with Azure Data Factory, Data Flows, and Databricks (Python/Scala familiarity is a plus).
- Working knowledge of CI/CD methodologies for Azure DevOps pipelines, including Terraform-managed infrastructure code.
- Experience with big data technologies: Hadoop, Spark (Scala, PySpark), etc.
- Experience building and optimizing big-data pipelines and data sets using workflow management tools: Airflow, Ambari, etc. (see the Airflow sketch after this list).
- Experience performing root cause analysis on errors related to data ingestion, movement, and transformation.
- Experience working with unstructured datasets.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Strong technical project management and organizational skills.
- Experience with traditional data warehouse schema design (star/snowflake schemas, dimensional modeling, etc.); a sample dimensional query follows this list.
- College degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
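For illustration, a minimal Airflow DAG sketch of the orchestration pattern named above. The dag_id, schedule, and the extract/transform/load callables are hypothetical placeholders, not a prescription for how this team structures its pipelines.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull data from the source system

def transform():
    ...  # reshape data for analytics

def load():
    ...  # write to the warehouse

with DAG(
    dag_id="orders_pipeline",       # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear extract -> transform -> load dependency
```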
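And a sample dimensional query against a hypothetical star schema (a fact_sales table joined to dim_date and dim_product dimensions), run through spark.sql to stay within the Python tooling used above. All table and column names are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-demo").getOrCreate()

# Join the fact table to two dimensions and aggregate: the typical
# access pattern for a dimensional model.
monthly_revenue = spark.sql("""
    SELECT d.year,
           d.month,
           p.category,
           SUM(f.sales_amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.year, d.month, p.category
""")
monthly_revenue.show()
```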
- ID: #49353152
- State: Missouri
- City: Kansas City 64102, USA
- Salary: Depends on experience
- Job type: Contract
- Posted: 2023-02-26
- Deadline: 2023-04-23
- Category: Et cetera