What are the responsibilities and job description for the Sr Manager - IT Data Engineer position at Charles Schwab Inc.?
Your Opportunity
Do you want to be part of a Data Engineering team building the next generation analytics cloud platform for a leading financial firm? At Schwab, the Data and Rep Technology (DaRT) organization governs the strategy and implementation of the enterprise data warehouse, data lake, and emerging data platforms. The organization’s mission is to drive activation of data solutions, rep engagement (Sales, Marketing, and Service) technology, and client intelligence to achieve targeted business outcomes, address data risk, and safeguard our competitive position. We help Marketing, Finance, Risk, and executive leadership make fact-based decisions by integrating and analyzing data.
We are looking for a seasoned Senior Cloud DevOps Engineer to create, maintain, support, and improve complex cloud operations for cutting-edge data analytics platforms, with special emphasis on continuously improving processes by applying modern DevOps and/or Site Reliability Engineering principles.
As a senior data engineer for the Cloud Data Warehouse Platform (CDWP) ecosystem automation function, you will be responsible for partnering with cross-functional application, platform, development, operations, architecture, and Business stakeholders to provide DevOps support for mission-critical applications, while helping streamline, automate, and mature existing processes.
You will be working with a team of talented technologists and SMEs who bring a lot of energy, focus, and fresh ideas that support our mission to provide value by seeing the world “Through Clients' Eyes.”
What you are good at
- Evangelizing DevOps and SRE mindset to solve problems through systematization
- Designing & developing automation and processes to enable teams to deploy, manage, configure, scale, and monitor their applications
- Managing, evolving, and building CI/CD pipelines
- Proactively identifying performance improvements in areas like responsiveness, availability, and scale
- Delivering solutions in a complex environment including distributed systems and applications, with a particular focus on GCP/AWS/Azure and their ecosystem of ETL, BI, and data governance platforms
- Assisting in building world-class, multi-cloud-capable, state-of-the-art products by:
  - Automating build processes
  - Incorporating static code quality tools
  - Helping identify code promotion qualities
  - Utilizing and promoting the use of advanced deployment patterns like Canary and Blue/Green
  - Building highly resilient cloud ecosystems capable of high availability and scale
  - Deploying to public cloud providers like GCP, AWS, or Azure
- Developing and maintaining tools & utilities that help accelerate day-to-day activities of operations and support teams
- Partnering with development and platform/prod support teams at the appropriate stages in application development to ensure any new systems or projects leverage enterprise DevOps standards
- Partnering with Business stakeholders to educate and navigate them through the Schwab Technology Services (STS) standards and policies for new projects or enhancements to existing investments
- Improving self-reliance and reducing dependency on the availability of development or platform/prod support team resources for experimentation and implementation of novel DevOps or SRE automations
- Bringing a passion to stay on top of tech trends, experiment with and learn new technologies, participate in internal technology communities, and mentor other members of the team
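To make the Canary deployment pattern named above concrete, here is a minimal sketch in Python (one of the scripting languages this role uses). The traffic steps, error budget, and function names are illustrative assumptions for this posting, not part of any Schwab system:

```python
# Hypothetical sketch of a canary rollout: shift traffic to the new release
# in steps, rolling back if the observed error rate exceeds a threshold.
# TRAFFIC_STEPS and ERROR_BUDGET are illustrative values, not real settings.

TRAFFIC_STEPS = [5, 25, 50, 100]   # percent of traffic sent to the canary
ERROR_BUDGET = 0.01                # abort the rollout above a 1% error rate


def canary_rollout(check_error_rate):
    """Advance traffic through TRAFFIC_STEPS.

    check_error_rate(pct) returns the error rate observed while pct% of
    traffic hits the canary. Returns ('promoted' | 'rolled_back', pct).
    """
    for pct in TRAFFIC_STEPS:
        rate = check_error_rate(pct)
        if rate > ERROR_BUDGET:
            # Shift all traffic back to the stable release.
            return ("rolled_back", pct)
    return ("promoted", 100)


if __name__ == "__main__":
    healthy = lambda pct: 0.001   # simulated: canary stays healthy
    failing = lambda pct: 0.5     # simulated: canary errors immediately
    print(canary_rollout(healthy))   # ('promoted', 100)
    print(canary_rollout(failing))   # ('rolled_back', 5)
```

A Blue/Green deployment is the limiting case of the same idea: a single 0% → 100% traffic switch between two identical environments, with rollback being a switch back.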
What you have
- Bachelor of Science or equivalent in Computer Science or a related field
- Minimum of 6-8 years’ experience in DevOps or Site Reliability Engineering for mission-critical applications
- Public Cloud experience in AWS/GCP/Azure
- Hands-on experience working on DevOps automation for AWS, Azure, or GCP technologies (e.g., BigQuery, GCS, Dataproc, Data Fusion, Cloud IAM, DLP)
- Hands-on experience with DevOps and CI/CD tools like Bitbucket, Bamboo, GitHub, Jenkins, Airflow, Cloud Build, etc.
- Proficiency in one or more of the following scripting languages: Bash, Python, Perl, or PowerShell
- Experience with scripting and orchestration using Infrastructure as Code (IaC) tools like Terraform
- Experience with one or more of the following Configuration Management Tools: Ansible, Chef, Salt, Puppet
- Experience with Splunk, Cloud Logging and other monitoring or reporting tools
- Experience with ETL products such as Informatica PowerCenter, Informatica Intelligent Cloud Services (IICS), or Talend
- Experience working with traditional databases (DB2, Teradata, Oracle, etc.), Big Data platforms (Hadoop, MapR, Hortonworks, Cloudera, etc.), or cloud platforms (AWS, GCP, Azure, etc.)
- Functional knowledge of the banking, stock brokerage, or insurance domains is preferred
- Critical-thinking and strong problem-solving skills with ability to analyze and understand data
- Ability to forge strong relationships and coordinate effectively with multiple stakeholders during outages, actively communicating updates to application development and Business partners
- Attitude that fosters a culture of collaboration and teamwork
- Familiar and comfortable with agile development techniques
- Demonstrated ability to work effectively within a team and with cross-functional technical and business teams
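The monitoring and reliability skills listed above revolve around availability targets and error budgets, which a short Python sketch can illustrate. The 99.9% SLO target and the function names here are assumptions for illustration, not Schwab figures:

```python
# Minimal SRE-style availability / error-budget calculation. The 99.9% SLO
# default is an illustrative assumption, not a real service target.


def availability(successes: int, total: int) -> float:
    """Fraction of requests served successfully (1.0 if no traffic yet)."""
    return successes / total if total else 1.0


def error_budget_remaining(successes: int, total: int, slo: float = 0.999) -> float:
    """Fraction of the error budget still unspent.

    A value of 1.0 means no failures have occurred; 0.0 means the budget
    is exhausted; a negative value means the SLO has been breached.
    """
    allowed_failures = (1.0 - slo) * total   # failures the SLO permits
    actual_failures = total - successes
    if allowed_failures == 0:
        return 0.0
    return 1.0 - actual_failures / allowed_failures


if __name__ == "__main__":
    # 50 failures out of 100,000 requests against a 99.9% SLO:
    # the SLO permits 100 failures, so half the budget remains.
    print(availability(99_950, 100_000))            # 0.9995
    print(error_budget_remaining(99_950, 100_000))  # 0.5
```

In practice the success/total counts would come from the monitoring stack the posting mentions (e.g., Splunk or Cloud Logging queries), and the remaining budget would gate decisions such as whether to proceed with a risky deployment.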