Your Opportunity
The WAM Data Architecture & Platform Engineering team supports the build-out of core data platform (WAM-Ex) capabilities: a cloud-native data platform (BigQuery/Snowflake) and core data capabilities, including orchestration, data security, and data quality, to be shared across Wealth and Asset Management. In this role you will define and build best practices and standards for federated development on the WAM-Ex data platform, design consistent and connected logical and physical data models across data domains, and design a consistent data engineering lifecycle for building data assets across initiatives on WAM-Ex.

What you are good at
- Work collaboratively with other engineers, architects, data scientists, analytics teams, and business product owners in an agile environment
- Architect, build, and support the operation of cloud and on-premises enterprise data infrastructure and tools
- Design and build robust, reusable, and scalable data-driven solutions and data pipeline frameworks to automate the ingestion, processing, and delivery of structured and unstructured data, both batch and real-time streaming (see the sketch after this list)
- Build data APIs and data delivery services to support critical operational processes, analytical models and machine learning applications
- Assist in selection and integration of data related tools, frameworks, and applications required to expand platform capabilities
- Understand and implement best practices in management of enterprise data, including master data, reference data, metadata, data quality and lineage
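To illustrate the kind of pipeline framework work described above, here is a minimal sketch of a streaming ingestion job using the Apache Beam Python SDK (runnable on Cloud Dataflow). This example is not part of the posting: the project, topic, table, and schema names are hypothetical placeholders.

```python
# Minimal streaming ingestion sketch: Cloud Pub/Sub -> transform -> BigQuery.
# All resource names below (project, topic, table, schema) are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(streaming=True)  # streaming mode for Pub/Sub input
    with beam.Pipeline(options=options) as p:
        (
            p
            # Read raw messages from a (hypothetical) Pub/Sub topic.
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                topic="projects/my-project/topics/trades")
            # Decode bytes and parse each message as JSON.
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            # Append parsed records to a (hypothetical) BigQuery table.
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "my-project:wam_ex.trades",
                schema="trade_id:STRING,symbol:STRING,quantity:INTEGER,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )


if __name__ == "__main__":
    run()
```

The same pipeline shape runs on the DirectRunner for local testing or on Dataflow by passing the appropriate runner option.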

What you have
- Bachelor's degree in computer science, information systems, math, engineering, or another technical field, or equivalent experience
- Six years of experience with Python or Java
- Four or more years of experience building data lakes and cloud data platforms leveraging cloud-native architecture (GCP/AWS), ETL/ELT, and data integration
- Three years of development experience with cloud services (AWS, GCP, Azure) utilizing various supporting tools (e.g., GCS, Dataproc, Cloud Dataflow, Airflow (Cloud Composer), Kafka, Cloud Pub/Sub)
- Expertise in developing distributed data processing and streaming frameworks and architectures (Apache Spark, Apache Beam, Apache Flink)
- In-depth knowledge of NoSQL database technologies (e.g., MongoDB, Bigtable, DynamoDB)
- Expertise in build and deployment tools (Visual Studio, PyCharm, Git/Bitbucket/Bamboo, Maven, Jenkins, Nexus)
- Five years of experience and expertise in database design techniques and philosophies (e.g., RDBMS, document stores, star schema, Kimball dimensional modeling)
- Five years of experience with integration and service frameworks (e.g., API gateways, Apache Camel, Swagger/OpenAPI, ZooKeeper, Kafka, messaging tools, microservices)
- Expertise with containerized microservices and REST/GraphQL-based API development (see the sketch after this list)
- Experience leveraging continuous integration/continuous delivery tools (e.g., Jenkins, Docker, containers, OpenShift, Kubernetes, and container automation) in a CI/CD pipeline
- Advanced understanding of software development and research tools
- Detail- and results-oriented, with a strong customer focus
- Ability to work as part of a team and independently
- Strong analytical and problem-solving skills
- Strong technical communication skills
- Ability to prioritize workload to meet tight deadlines
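As an illustration of the containerized microservice and REST API development listed above, here is a minimal sketch of a data delivery API. FastAPI is an assumed choice (the posting names no specific framework), and the endpoint, model, and in-memory store are hypothetical placeholders for a real data backend.

```python
# Minimal REST data API sketch; all names below are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="WAM-Ex data API (sketch)")


class Position(BaseModel):
    account_id: str
    symbol: str
    quantity: float


# Hypothetical in-memory stand-in for a real data store.
_positions: dict[str, Position] = {}


@app.put("/positions/{account_id}")
def upsert_position(account_id: str, position: Position) -> Position:
    """Create or replace the position record for an account."""
    _positions[account_id] = position
    return position


@app.get("/positions/{account_id}")
def get_position(account_id: str) -> Position:
    """Return the position record for an account, or 404 if absent."""
    if account_id not in _positions:
        raise HTTPException(status_code=404, detail="account not found")
    return _positions[account_id]
```

Saved as, say, data_api.py, this runs locally with `uvicorn data_api:app` and would typically be packaged into a container image and deployed behind an API gateway.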