Onsite/Hybrid/Remote: Onsite, 4 days a week
Duration: 9 months to start, with extension possible
Rate Range: $89/hr on W2, depending on experience (no C2C, 1099, or subcontracting)
Work Authorization: GC, USC, and all valid EADs except H-1B
Responsibilities:
● Bring 7 years of experience in the design, development, and maintenance of data pipelines and ETL processes.
● Collaborate with data scientists, analysts, product teams, and other stakeholders to understand data requirements and translate them into technical specifications.
● Write SQL, Python, and Spark code for data analysis and reporting in Databricks.
● Provide technical leadership on projects in a complex, matrixed organization.
● Optimize data storage, retrieval, and processing for performance and scalability.
● Implement data quality checks and monitoring (an illustrative sketch follows this list).
● Work with cloud-based technologies (e.g., AWS, GCP, or Azure).
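For illustration only, here is a minimal sketch of the kind of data quality check this role involves, assuming PySpark on Databricks; the table names, column name, and threshold are hypothetical examples, not part of this posting.

    # Minimal data quality check sketch in PySpark (Databricks-style).
    # Table names, column name, and threshold below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("dq-check").getOrCreate()

    # Read a source table (hypothetical name).
    df = spark.table("raw.orders")

    # Compute a simple null-rate metric on a key column.
    total = df.count()
    nulls = df.filter(F.col("customer_id").isNull()).count()
    null_rate = nulls / total if total else 1.0

    # Fail the pipeline run if the null rate exceeds tolerance.
    if null_rate > 0.01:
        raise ValueError(f"customer_id null rate {null_rate:.2%} exceeds 1% threshold")

    # Otherwise write the validated data onward (hypothetical target).
    df.write.mode("overwrite").saveAsTable("curated.orders")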
Required Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field.
● 5 years of experience in data engineering.
● Proficiency in SQL, Python, or Scala.
● Experience with big data tools (e.g., Spark, Hadoop).
● Familiarity with data modeling and database design.
● Strong problem-solving skills and attention to detail.
Must haves:
● STEM degree
● Databricks
● Python
● Excellent communication skills: a team player who asks questions, provides feedback, interacts well in a team environment, elicits roadmaps, reaches out to gather data, and makes suggestions.
● Coding: SQL, Python, and Scala are all used.
● Knowledge of Azure is helpful.
Preferred Qualifications:
● Master's degree in a relevant field.
● Certifications in cloud platforms (e.g., AWS Certified Data Analytics, Google Professional Data Engineer).
● Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch).