Immediate Position: Data Engineer - 100% Remote


Hi, greetings from Quantum Vision (quantumvision.com). Quantum Vision takes pride in being a business-aligned technology services provider with proven capabilities in software application development, product development, application management, and systems and technology integration services. For the 5th time, Quantum Vision has made the list of the fastest-growing private companies in America, appearing on the Inc. 5000 list in 2013, 2014, 2015, 2016, and 2017. If you are available and comfortable with the requirement below, please send your updated resume and contact details to soujanyareddy@quantumvision.com, or reach me at (240) 657-1116.

Job Title: Data Engineer
Duration: 4 years / full time on W2
Location: Herndon, VA or Washington, DC (100% remote)

The work location is 100% remote for now; we will meet with the team and clients in our Herndon or DC office as needed if the candidate is local. We are open to fully remote arrangements as long as the candidate can support the program during core hours, 9:00am-4:00pm EST.

Work Authorization: Candidates must be legally authorized to work on a W2 or 1099 tax term, per the client. All candidates must have resided in the US for the past 3 consecutive years. To obtain the client's public trust security clearance, non-US citizens must not have traveled outside the US for more than 30 consecutive days.

Required Skills: StreamSets Data Collector, with strong administration and controller experience. This is primarily an administrator role, not a developer role.

Job Description: We are seeking a Data Engineer with 7-9+ years of experience. The Data Engineer is responsible for the maintenance, synchronization, cleaning, and migration of transactional data in a hybrid environment spanning both on-prem systems and a modern cloud-based microservices environment. The Data Engineer works with the product teams to understand, analyze, document, and efficiently implement streaming as well as batch-oriented data delivery, synchronizing legacy and modern data stores while ensuring data integrity. The Data Engineer also supports application database design to help eliminate data duplication and to enable selective event-based and schedule-based data transfer to endpoints within the cloud and legacy environments as required.

The Data Engineer must be able to:

  • Drive programmatic pipeline generation and orchestration to enhance repeatability and rapid deployment, using out-of-the-box thinking, AWS native capabilities, and CI/CD tools, while following established design patterns and methods.
  • Rapidly develop technical solutions, working closely with the integrated product teams and developers with minimal direction from senior or lead resources.
  • Understand data needs and construct data pipelines that automate event-driven, bi-directional, selective data replication, along with micro-batch and batch data pipelines.
  • Standardize data processing modules to deliver modularity and enhance reusability.
  • Utilize identified tools and services such as AWS Glue, Python, StreamSets, Step Functions, Lambda, Kinesis Data Streams, and Data Firehose (see the sketch after this list).
  • Create and maintain standards and best practices for data and pipelines.

The candidate must have a successful track record in ETL job design, development, and automation activities with minimal supervision.
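To illustrate the kind of event-driven, selective replication pipeline described above, here is a minimal sketch in Python, not the client's actual code: an AWS Lambda handler consuming change events from a Kinesis Data Stream. The table names, payload fields, and the replicate_to_legacy helper are hypothetical placeholders.

    # Minimal sketch: event-driven, selective replication via Lambda + Kinesis.
    # Payload shape and helper names are hypothetical.
    import base64
    import json

    # Hypothetical scope: only changes to these tables are replicated.
    REPLICATED_TABLES = {"orders", "customers"}

    def replicate_to_legacy(change: dict) -> None:
        """Hypothetical sink; in practice this might call an API or use DMS."""
        print(f"replicating {change['table']} pk={change['pk']}")

    def handler(event, context):
        """Lambda entry point for a Kinesis Data Streams trigger."""
        processed = 0
        for record in event["Records"]:
            # Kinesis delivers each payload base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            # Selective replication: skip tables outside the replicated scope.
            if payload.get("table") in REPLICATED_TABLES:
                replicate_to_legacy(payload)
                processed += 1
        return {"processed": processed}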
The candidate will be expected to support a variety of structured, semi-structured, and unstructured data in streaming and batch frameworks. The candidate must possess working AWS knowledge and proven skills with AWS data tools and services such as AWS Glue, Step Functions, Lambda, and DMS; troubleshoot, monitor, and coordinate defect resolution related to data processing and preparation; and be responsible for the creation and support of all data pipeline processes across the various data assets within the current scope of the system.

The specific requirement for this Data Engineer: the role deals with bi-directional synchronization of data using complex, event-driven and time-based ETL processes. The ETL tool of choice is StreamSets, so the candidate needs either to be a master of StreamSets or to possess substantially strong abilities and experience in creating data pipelines with transactional systems at both ends, source and target. To clarify: most ETL developers and data engineers have experience moving data from transactional systems to data warehouses. This is where our specific requirement differs, and hence it calls for slightly nuanced capabilities (a brief sketch of the pattern appears after the lists below). We are looking for a candidate who is intelligent, can work independently, and is good at coding.

Required Skills:

  • Python (required; highly experienced)
  • SQL (required; highly experienced)
  • Programming skills, including shell programming in addition to SQL
  • 5+ years of hands-on experience with ETL tools such as SAP DS and Pentaho PDI
  • 4+ years of hands-on experience with Python and various Python toolkits and libraries for data processing and pipelines
  • 4+ years of experience creating complex SQL queries and functions
  • 2+ years of hands-on experience with AWS Glue, Python, Step Functions, Kinesis Data Streams, and Data Firehose
  • 1+ year of experience in Java and Linux scripting (essential)
  • 1+ year of experience with CI/CD tools, including Git, for ETL and script repositories

Desired Skills:

  • Experience working in AWS environments; any AWS certification is advantageous
  • Experience working with SQL Server and tools such as SSRS
  • Experience with big data tools such as EMR/Spark and Databricks/PySpark
  • Experience with database versioning tools such as Flyway
  • Experience with Ansible and Jenkins scripting

Education: Bachelor's degree in Computer Science or a related discipline.
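The sketch promised above: the nuance of bi-directional synchronization between two transactional stores, with hypothetical names throughout. Unlike one-way warehouse ETL, each change must carry its origin so it is never echoed back to the system it came from.

    # Minimal sketch: bi-directional replication with echo (loop) prevention.
    # Store and field names are hypothetical; real stores would be databases.
    def apply_change(change: dict, stores: dict) -> None:
        """Apply a change to every store except the one that originated it."""
        for name, store in stores.items():
            if name == change["origin"]:
                continue  # never replicate a change back to its source
            store[change["pk"]] = change["row"]

    # Example: a change captured on the legacy side flows only to the cloud side.
    legacy, cloud = {}, {}
    change = {"pk": 42, "row": {"status": "shipped"}, "origin": "legacy"}
    apply_change(change, {"legacy": legacy, "cloud": cloud})
    assert 42 in cloud and 42 not in legacy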

  • ID: #22846728
  • State: Maryland, USA
  • City: Rockville 20849
  • Salary: $60
  • Job type: Contract
  • Showed: 2021-11-16
  • Deadline: 2022-01-09
  • Category: Et cetera