What are the responsibilities and job description for the Sr Data Engineer position at Thermo Fisher?
Title: Sr Data Engineer
When you are part of the team at Thermo Fisher Scientific, you’ll do important work. You’ll have the opportunity to grow and learn in a culture that empowers your development. We have created an inclusive, global environment that values the power of diverse talent, backgrounds, and experiences to drive speed, productivity, innovation, and growth. We are seeking an energetic, responsible candidate to join our growing organization.
How will you make an impact?
As part of an organization that provides analytics-driven data solutions for all businesses across Thermo Fisher Scientific, you will be instrumental in helping our business partners and customers with their data and analytics needs.
What will you do?
The Sr Data Engineer will join our Enterprise Data Platform delivery team and will lead the development of data-driven applications and automations across a variety of infrastructure (both on-premises and cloud). The Sr Data Engineer must work collaboratively in an Agile team to design, develop, and maintain data structures for the Enterprise Data Platform. This position offers an exciting opportunity to work on processes that interface with multiple systems, including AWS, Oracle, middleware, and ERPs. The candidate will take part in development projects and pilots and will advance best design practices.
The salary range estimated for this position is $85,700 - $219,200. This position will also be eligible to receive a variable annual bonus based on company, team, and/or individual performance results in accordance with company policy. Actual compensation will be confirmed in writing at the time of offer.
Responsibilities
- Lead, design, develop, deploy, and maintain mission-critical data applications for the Enterprise Data Platform
- Deliver data-driven applications by leading several consultants.
- Serve as technical liaison to one or two Agile delivery teams.
- Participate in all phases of the Enterprise Data Platform development life cycle as appropriate, including, but not limited to, gathering customer requirements, defining technical requirements, creating high-level architecture diagrams, data validation, and training sessions
- Engage in data solutions and business intelligence projects and drive them to closure
- Lead different aspects of data analytics, data quality, machine learning, data acquisition, and visualization, along with some design and analysis tasks
- Coordinate and work closely with the architecture and data operations teams
- For Agile projects, collaborate with the Product Owner on epic and user story definitions
- Develop documentation and training materials, and participate with customer groups in planning longer-term system enhancements.
Education
- Bachelor’s Degree in Computer Science or equivalent with 10 years of experience
- More than 5 years’ experience in a data engineering role with a strong understanding of technical, business, and operational process requirements.
Experience
- 10 years of total IT experience, leading and developing BI and data warehouse (DW) applications.
- 5 years of experience in data lake, data analytics, and business intelligence solutions
- Strong experience with ETL tools, preferably Informatica, Databricks, and AWS Glue
- Extensive data lake experience using Databricks on AWS, Apache Spark, and Python
- 2 years of working experience in a DevOps environment, including data integration and pipeline development.
- 2 years of experience with AWS cloud data integration using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda across the S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems
- Demonstrated skill and ability in the development of data warehouse projects/applications (Oracle & SQL Server)
- Strong hands-on experience in Python development, especially PySpark in an AWS cloud environment (a brief sketch follows this list).
- Experience with Python and common Python libraries.
- Strong analytical database experience: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
- Experience with source control systems such as GitHub and Bitbucket, and with Jenkins build and continuous integration tools.
- Knowledge of extract development against ERPs (SAP, Siebel, JDE, Baan) preferred
- Strong understanding of AWS data lakes and Databricks.
- Experience with SAP ERP applications, data, and processes desired
- Exposure to AWS data lakes, AWS Lambda, Amazon S3, Kafka, Redshift, and SageMaker is an added advantage
- Experience with supply chain, supplier invoice, and purchase order data is a plus.
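To make the PySpark expectation above concrete, here is a minimal sketch of the kind of data lake ETL job this role would own. It is an illustration only: the bucket names, paths, and column names are hypothetical, and it assumes a Spark runtime with S3 access (such as EMR or a Glue job) is already configured.

```python
# Minimal PySpark ETL sketch: read raw supplier invoices from a data lake,
# apply simple cleansing, and write curated Parquet back to S3.
# All bucket names, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("supplier-invoice-etl").getOrCreate()

# Read raw CSV landed in the data lake (hypothetical path).
raw = spark.read.csv("s3://example-raw-bucket/supplier_invoices/",
                     header=True, inferSchema=True)

# Basic cleansing: drop rows missing an invoice id, normalize amounts,
# and stamp each row with a load date for downstream auditing.
curated = (raw
           .filter(F.col("invoice_id").isNotNull())
           .withColumn("amount_usd", F.col("amount").cast("double"))
           .withColumn("load_date", F.current_date()))

# Write curated data as Parquet, partitioned by load date (hypothetical path).
(curated.write
        .mode("overwrite")
        .partitionBy("load_date")
        .parquet("s3://example-curated-bucket/supplier_invoices/"))

spark.stop()
```

In practice a job like this would be parameterized and scheduled through Glue, EMR steps, or an orchestration tool rather than hard-coding paths.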
Skills and Abilities
- Full life cycle implementation experience in AWS using PySpark/EMR, Athena, S3, Redshift, API Gateway, Lambda, Glue, and other managed services (a brief sketch follows this list)
- Experience with Agile development methodologies, following DevOps, DataOps, and DevSecOps practices.
- Manage the life cycle of ETL pipelines and related cloud platform tools, including GitHub, Jenkins, Terraform, Jira, and Confluence.
- Excellent written, verbal, interpersonal, and stakeholder communication skills.
- Ability to analyze trends in very large datasets.
- Ability to work with cross-functional teams across multiple regions and time zones, effectively using multiple forms of communication (email, MS Teams voice and chat, meetings)
- Excellent prioritization and problem-solving skills.
- Ability to work independently and as a member of a cross-functional team
- Good administrative and time-management skills to handle multiple projects and prioritize effectively.
- Willingness to learn, be mentored, and improve
- Ability to interpret data, translate it into information, and communicate that information effectively, both verbally and visually.
- Ability to multitask and apply initiative and creativity on challenging projects.
- Strong problem-solving and troubleshooting skills; ability to break a complex problem down into smaller, manageable ones
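As a small illustration of wiring AWS managed services together (per the first bullet above), the following sketch shows an S3-triggered Lambda starting a Glue job via boto3. The Glue job name, bucket layout, and trigger configuration are assumptions for the example, not details from the posting.

```python
# Minimal sketch of combining AWS managed services: an S3-triggered Lambda
# that kicks off a Glue ETL job for a newly arrived object.
# The Glue job name and bucket layout are hypothetical placeholders.
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # S3 put-event notifications carry the bucket and key of the new object.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Start the (hypothetical) Glue job, passing the object as a job argument.
    response = glue.start_job_run(
        JobName="supplier-invoice-etl",  # hypothetical job name
        Arguments={"--input_path": f"s3://{bucket}/{key}"},
    )
    return {"job_run_id": response["JobRunId"]}
```

This event-driven pattern keeps the ingestion pipeline serverless and avoids polling for new files.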
Salary: $85,700 - $219,200