What are the responsibilities and job description for the Staff Data Engineer position at Chewy?
Our Opportunity:
Chewy is seeking a Staff Data Engineer for our Discovery Labs team in Boston, MA. In this position, you will help us build scalable, robust data solutions for Discovery at Chewy. You will bring a passion for delivering an outstanding customer experience, experience building scalable solutions, and communication skills that instill trust in the teams you work with.

Millions of pet parents with unique needs visit Chewy.com looking for products for their beloved pets. Our task is to decide which products would be most useful to them and to help them discover those products. How do we do this? Meet the Personalization team @ Chewy. We use the best machine learning techniques and continuously test the outcomes to simplify product discovery for pet parents shopping for their pets' needs on Chewy.com. Our exceptional multi-disciplinary team of data scientists, data engineers, software engineers, and product managers works together to power personalized recommendations and product discovery for pet parents. Our team has single-threaded ownership of the space, allowing us to decide on impactful products that we can experiment with, measure with metrics, and deliver at a fast pace.
What You'll Do:
You love building tools and data pipelines, can create clear and effective reports and data visualizations, and can partner with stakeholders to answer key business questions. You will also have the opportunity to display your skills in the following areas:
- Design, implement, and automate deployment of our distributed system for collecting and processing log events from multiple sources
- Design data schema and operate internal data warehouses and SQL/NoSQL database systems
- Write Extract-Transform-Load (ETL) jobs and Spark/Hadoop jobs to calculate business metrics
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions
- Monitor and troubleshoot operational or data issues in the data pipelines
- Drive architectural plans and implementation for future data storage, reporting, and analytic solutions
What You'll Need:
- Bachelor's degree in Computer Science, Mathematics, Statistics, Finance, related technical field, or equivalent work experience
- Relevant work experience in analytics, data engineering, business intelligence, or a related field
- Extensive experience implementing big data processing technologies such as Hadoop and Apache Spark
- Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets
- Detailed knowledge of data warehouse technical architecture, infrastructure components, ETL and reporting/analytic tools and environments
- Experience with data visualization software (Tableau/QlikView) or open-source alternatives
Bonus:
- Graduate degree in Computer Science, Mathematics, Statistics, Finance, related technical field
- Strong ability to effectively communicate with both business and technical teams
- Demonstrated experience delivering actionable insights for a consumer business
- Proficiency with search technologies (Elasticsearch and the Elastic stack)
- Coding proficiency in at least one modern programming language (Python, Ruby, Java, etc.)
- Experience with AWS technologies including Redshift, RDS, S3, EMR