- 3+ years of experience scaling and operating distributed systems such as big data processing engines (e.g., Apache Hadoop, Apache Spark), distributed file systems (e.g., HDFS, Ceph, S3), streaming systems (e.g., Apache Flink, Apache Kafka), resource management systems (e.g., Apache Mesos, Kubernetes), or identity and access management systems (e.g., Apache Ranger, Sentry, OPA)
- 3+ years of experience with infrastructure as code and systems automation
- Fluency in Java or a similar language
- Ability to debug complex issues in large-scale distributed systems
- Passion for building infrastructure that is reliable, easy to use, and easy to maintain
- Excellent communication and collaboration skills
- Experience with Spark and ETL processing pipelines is helpful, but not required
- Experience with systems security, identity protocols, and encryption is helpful, but not required
- Scale and operationalize privacy and security systems in a big data environment, leveraging technologies such as Spark, Kafka, Presto, Flink, and Hadoop in both on-premises and AWS environments, through automation and infrastructure as code
- Ensure the data infrastructure offers reliable, high-quality data with consistent SLAs, backed by good monitoring, alerting, and incident response, and by continual investment to reduce tech debt
- Write code and documentation, and participate in code reviews and design sessions
- ID: #49347448
- State: California, USA
- City: Cupertino 95014
- Salary: $75 - $85
- Job type: Contract
- Posted: 2023-02-21
- Deadline: 2023-04-18
- Category: Et cetera