Data Engineer

30 Sep 2024

Vacancy expired!

Data Engineer Job Description

Our client is looking for a full-time Data Engineer. They are dedicated to building premier digital banking solutions for the middle class through a combination of technology, analytics, and superior customer service. This high-growth financial technology company has been featured in The Wall Street Journal, The New York Times, TechCrunch, Fortune, and Bloomberg, and has raised over $600 million in equity capital. Their software uses machine-learning technology to efficiently mitigate default risk and fraud. Their mission is to democratize loans, helping people with less-than-perfect credit scores acquire them. They have an office in downtown Chicago but are open to fully remote candidates!

The client is looking for a data engineer to join its Data Services team. As a data engineer, you will work closely with Data Management and business teams to design and build end-to-end data pipelines in their data lake. You'll be part of a fast-paced, burgeoning fintech environment, working side by side with experts in the field. If you are passionate about managing data at scale and contributing to a robust data lake, this is the role for you.

Required Skills & Experience

  • 2+ years' experience on a production big data deployment (Hadoop, MapReduce, Spark, Airflow, etc.)
  • You've worked on production systems written in Python, especially using dependency management frameworks like Airflow
  • You've written and optimized large Spark, Hive, or Dremio queries
  • You have experience working on a major cloud platform (AWS, Azure, Google Cloud Platform)
  • You have strong knowledge of Software Engineering & Data fundamentals as well as DevOps best practices
  • You thrive in a collaborative environment involving different stakeholders and subject matter experts, and enjoy working with a diverse group of people with different expertise
What You Will Be Doing

Tech Breakdown
  • Python
  • Hadoop, MapReduce, PySpark, Spark, Airflow
  • AWS, Azure, Google Cloud Platform
Daily Responsibilities
  • Deploy cutting-edge technology (Spark, Airflow, EMR) to solve large-scale data processing problems
  • Design, architect, implement, and document new, scalable data pipelines
  • Conduct ETL and ad hoc query performance tuning, monitoring, troubleshooting, and support
  • Audit and QA data and processes to ensure data quality and integrity throughout the data pipeline
  • Improve the customer experience and availability of our data for business users
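The responsibilities above center on dependency-managed pipelines of the kind a scheduler like Airflow executes as a DAG. As an illustrative sketch only (not the client's actual stack), here is a minimal dependency-ordered pipeline using Python's standard-library `graphlib`; the task names are hypothetical placeholders for real ETL steps.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring the DAG structure a scheduler such as Airflow would manage.
dag = {
    "extract": set(),
    "validate": {"extract"},    # audit/QA step runs after extraction
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

def run_pipeline(dag):
    """Execute tasks in dependency order; return the order they ran in."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        # In a real deployment each step would submit a Spark/EMR job;
        # here we only record the execution order.
        pass
    return order

print(run_pipeline(dag))
# → ['extract', 'validate', 'transform', 'load', 'report']
```

A real orchestrator adds retries, scheduling, and monitoring on top of this ordering, which is where tools like Airflow come in.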
The Offer
  • Bonus eligible
You will receive the following benefits:
  • Medical Insurance
  • Dental Benefits
  • Vision Benefits
  • Paid Time Off (PTO)
  • 401(k) (including match, if applicable)
Applicants must be currently authorized to work in the US on a full-time basis now and in the future. #careers #chicago #LI-EU1

  • ID: #46114974
  • State: Illinois (Chicago 60601, USA)
  • City: Chicago
  • Salary: USD TBD
  • Job type: Permanent
  • Showed: 2022-09-30
  • Deadline: 2022-11-27
  • Category: Et cetera