What are the responsibilities and job description for the Sr. Data Engineer/ Architect position at W3Global?
Job Description
Roles & Responsibilities:
Perform analysis, design, development, and configuration functions as well as define technical requirements for assignments of intermediate complexity.
Participate with the team in the analysis, assessment, and resolution of defects and incidents of intermediate complexity, escalating as appropriate.
Work within guidelines set by the team to independently tackle well-scoped problems.
Seek opportunities to expand technical knowledge and capabilities.
Stand up data platforms, build out ETL pipelines, write custom code, interface with data stores, perform data ingestion, and build data models.
Oversee data ingestion into enterprise data mining solutions.
Ability to take ownership when necessary, acting with urgency, putting customers first, and looking to the future.
Solid understanding of cloud technologies and enterprise-level Data Strategy and Data Governance concepts.
Familiarity with data visualization tools and methodologies is a plus.
Development of data pipelines in AWS, using all types of data sets along with Redshift.
Strong familiarity and hands-on experience with Databricks, Data Factory, and StreamSets.
Experience in writing code in Python/Scala.
Experience & Qualifications:
10 years of experience in the architecture, design, and implementation of analytics solutions.
Hands-on experience designing and developing applications using Databricks.
Experience with solutioning on AWS.
Experience migrating data from other platforms to AWS.
In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib.
Programming experience with SQL, stored procedures, and Spark/Scala.
Strong understanding of data modeling and defining conceptual, logical, and physical data models.
Ability to design and demonstrate system architecture across different environments.
Ability to work with both on-premises and cloud systems.
Data Engineering experience
Experience working with data platform and ETL tools such as Databricks and Redshift.
Experience coding in Python and PySpark.
Experience with Python SDLC tools (flake8, commitizen, CircleCI)
Comfortable working with APIs
Cloud experience, specifically working with AWS (ECS, Redshift)
Experience working with relational databases and SQL scripts.