What are the responsibilities and job description for the Data Engineer position at Stellent IT LLC?
Job Details
Data Engineer
Location: Cincinnati, Ohio (Onsite)
Interview: Phone/Skype
Must be local to Cincinnati, OH
Job Description:
TOP SKILLS NEEDED:
Need STRONG Azure Synapse experience and the ability to leverage Power BI for advanced data
visualization and reporting.
Top 3 Skills: deep knowledge of Azure Synapse and the BI space, creating dimensional
models in Synapse, and performance tuning (a brief sketch follows).
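For illustration only, here is a minimal sketch of the dimensional-modeling work this calls for, assuming a Databricks or Synapse Spark environment; every table and column name (sales_raw, dim_product, fact_sales, and so on) is hypothetical rather than taken from the posting.

```python
# Hypothetical star-schema sketch in PySpark; names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dimensional-model-sketch").getOrCreate()

raw = spark.table("sales_raw")  # assumed raw source table

# Dimension table: one row per product, keyed by a surrogate key.
dim_product = (
    raw.select("product_id", "product_name", "category")
       .dropDuplicates(["product_id"])
       .withColumn("product_sk", F.monotonically_increasing_id())
)

# Fact table: measures joined to the dimension's surrogate key.
fact_sales = (
    raw.join(dim_product.select("product_id", "product_sk"), "product_id")
       .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
       .select("product_sk", "order_date", "quantity", "unit_price", "revenue")
)

dim_product.write.mode("overwrite").saveAsTable("dim_product")
fact_sales.write.mode("overwrite").saveAsTable("fact_sales")
```

The surrogate key decouples the fact table from source-system identifiers, which is the usual motivation for the pattern.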
Requirements:
Experience in Data modeling and advanced SQL techniques
Experience working on cloud migration methodologies and processes including tools like
Databricks, Azure Data Factory, Azure Functions, and other Azure data services
Expert in SQL, Python, Spark, Databricks
Experience working with varied data file formats (Avro, JSON, CSV) using PySpark for
ingestion and transformation (see the ingestion sketch after this list)
Experience with DevOps process and understanding of Terraform scripting
Understanding the benefits of data warehousing, data architecture, data quality
processes, data warehousing design and implementation, table structure, fact and
dimension tables, logical and physical database design
Experience designing and implementing ingestion processes for unstructured and
structured data sets
Experience designing and developing data cleansing routines utilizing standard data
operations
Knowledge of data, master data, metadata related standards, and processes
Experience working with multi-terabyte data sets, troubleshooting issues, and performance
tuning of Spark and SQL queries (see the tuning sketch after this list)
Experience using Azure DevOps/GitHub Actions CI/CD pipelines to deploy code
Microsoft Azure certifications are a plus
Minimum of 7 years of hands-on experience in the design, configuration,
implementation, and data migration of medium- to large-sized enterprise data platforms
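As referenced in the list above, a minimal PySpark sketch of ingesting the three named file formats; the ADLS paths and column names are hypothetical, and the Avro reader assumes the spark-avro package is available on the cluster.

```python
# Hypothetical multi-format ingestion sketch; paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Each reader yields a DataFrame regardless of the on-disk format.
orders_csv = (spark.read
              .option("header", "true")
              .option("inferSchema", "true")
              .csv("abfss://raw@account.dfs.core.windows.net/orders/*.csv"))

events_json = spark.read.json("abfss://raw@account.dfs.core.windows.net/events/*.json")

# Reading Avro requires the spark-avro package (bundled in Databricks runtimes).
payments_avro = (spark.read
                 .format("avro")
                 .load("abfss://raw@account.dfs.core.windows.net/payments/*.avro"))

# A simple cleansing pass before landing the data in the curated zone.
cleaned = (orders_csv
           .dropDuplicates()
           .filter(F.col("order_id").isNotNull())
           .withColumn("ingested_at", F.current_timestamp()))

cleaned.write.mode("append").parquet(
    "abfss://curated@account.dfs.core.windows.net/orders")
```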
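And a second illustrative sketch, this one of the Spark query tuning the requirements mention; the configuration values are placeholders to be tuned against real query plans, not recommendations.

```python
# Hypothetical Spark tuning sketch; all settings and tables are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Adaptive query execution re-optimizes shuffle partitioning at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.shuffle.partitions", "400")  # placeholder value

fact = spark.table("fact_sales")   # large fact table (hypothetical)
dim = spark.table("dim_product")   # small dimension table (hypothetical)

# Broadcasting the small dimension avoids shuffling the much larger fact side.
joined = fact.join(F.broadcast(dim), "product_sk")

# Inspect the physical plan to confirm the broadcast join was chosen.
joined.explain()
```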
Core Technologies
Azure Functions
Python
SQL Server
Azure Data Factory
Azure Databricks
Terraform
Azure DevOps
GitHub / GitHub Actions
T-SQL and SQL stored procedures
Azure Log Analytics
Azure Data Lake Storage
Azure Synapse
Key Responsibilities
SQL, Python, Spark, Databricks
Working with varied data file formats (Avro, JSON, CSV) using PySpark for ingestion and
transformation
DevOps process and Terraform scripting
Leadership skills: must be able to put together and deliver presentations.
Solid communication while working with the PMO; must be able to translate what the product
team is asking for and relay it clearly to the engineering team.
Navnish Kumar
IT Technical Recruiter
Stellent IT Phone:
Email: navnish