Vacancy expired!
Cloud Data Engineer

This position is 100% remote; however, onboarding requires being 100% onsite for the first 2 weeks.

About the Job
- Duration: Long term renewable contract
- Location: Greer, SC
- Pay rate: Hourly, depending on experience
- Job ID: 4208
- Implements and enhances complex data processing pipelines with a focus on collecting, parsing, cleaning, managing and analyzing large data sets that produce valuable business insights and discoveries.
- Determines the required infrastructure, services, and software required to build advanced data ingestion & transformation pipelines and solutions in the cloud.
- Assists data scientists and data analysts with data preparation, exploration and analysis activities.
- Applies problem solving experience and knowledge of advanced algorithms to build high-performance, parallel, and distributed solutions.
- Performs code and solution reviews and recommends enhancements that improve efficiency, performance, and stability and reduce support costs.
- Applies the latest DevOps and Agile methodologies to improve delivery time.
- Works with Scrum teams in daily stand-ups, providing frequent progress updates.
- Supports applications, including incident and problem management.
- Performs debugging and triage of incidents and problems and deploys fixes to restore services.
- Documents requirements and configurations and clarifies ambiguous specs.
- Performs other duties as assigned by management.
- BA/BS degree in Business, Computer Science, or Electrical Engineering preferred.
- MS degree (preferred).
- Enterprise software engineering experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open-source) software platforms and large-scale data infrastructure solutions.
- Software engineering and architecture experience within a cloud environment (Azure, AWS).
- Enterprise data engineering experience within any "Big Data" environment (preferred).
- Software development experience using Python.
- Experience working on large-scale data integration and analytics projects, including using cloud (e.g., AWS Redshift, S3, EC2, Glue, Kinesis, EMR) and data-orchestration (e.g., Oozie, Apache Airflow) technologies.
- Experience implementing distributed data processing pipelines using Apache Spark.
- Experience designing relational/NoSQL databases and data warehouse solutions.
- Experience in an Agile environment (Scrum, Lean, or Kanban).
- Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets.
- Unix/Linux operating system knowledge (including shell programming).
- 1+ years of experience in automation/configuration management tools such as Terraform, Puppet or Chef.
- 1+ years of experience in container development and management using Docker.
- Languages: SQL, Python, Spark
- AWS/Azure cloud provider training/certifications (preferred)
- Cloud Security
- JIRA/Confluence/BitBucket
- Splunk
- TLS/SSL certificates
- Nice to Have: Docker/Docker Swarm, Jenkins, Terraform, Python programming
- Paid Holidays/Paid Time Off (PTO)
- Medical/Dental Insurance
- Vision Insurance
- Short Term/Long Term Disability
- Life Insurance
- 401(k)
- ID: #43420053
- State: South Carolina (Greer 29651, USA)
- City: Greer
- Salary: Based on experience
- Job type: Contract
- Posted: 2022-06-21
- Deadline: 2022-08-19
- Category: Et cetera