What are the responsibilities and job description for the Software Engineer position at Idexcel?
Job Title: Mid-Level Pega Developer
Location: Vienna, VA / Pensacola, FL / San Diego, CA
Duration: Long Term
Hybrid; in-office requirement: 2x a week or 8x a month
Skills (required – All Advanced Level):
Hands-on experience creating automated data pipelines using modern technology stacks, both for batch ETL and for data processing that loads advanced analytics data repositories
Hands-on experience with data warehousing, PostgreSQL, Cassandra, Cloud technologies (AWS, ADLS).
Experience with SQL, stored procedures, and packages
Description:
Develop technical solutions for data acquisition, data integration, and data sharing, translating our business vision and strategies into effective IT and business capabilities through the design, implementation, and integration of IT systems that utilize legacy systems, Microsoft Azure, and Pega Cloud (AWS). The Data Engineer will be responsible for guiding the design and development of our data solutions, with a specific focus on Azure Data Factory and ADLS pipelines as well as AWS Glue and AWS pipelines that support Pega Marketing system capabilities, and on integrating data from Teradata, other external sources, and applications into high-performing operational hubs, a Data Lake, and Microsoft SQL Server. Recognized as an expert with a specialized depth and/or breadth of expertise in the discipline; solves highly complex problems and takes a broad perspective to identify solutions; leads functional teams or projects; works independently.
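To make the scope concrete, below is a minimal sketch of the kind of batch ETL job this role would build with AWS Glue: it reads a staged extract registered in the Glue Data Catalog, remaps columns, and loads the result into a PostgreSQL target. All database, table, and connection names are illustrative placeholders, not the employer's actual resources.

```python
# Hypothetical AWS Glue job: batch ETL from a Data Catalog table into PostgreSQL.
# Catalog, table, and connection names below are placeholders for illustration only.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the staged source data registered in the Glue Data Catalog
# (e.g. a daily extract landed in the data lake).
source = glue_context.create_dynamic_frame.from_catalog(
    database="marketing_lake",           # placeholder catalog database
    table_name="customer_daily_extract"  # placeholder table
)

# Rename/retype columns to match the target operational table.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("cust_id", "string", "customer_id", "string"),
        ("email_addr", "string", "email", "string"),
        ("last_updt_ts", "timestamp", "last_updated", "timestamp"),
    ],
)

# Load into the PostgreSQL database behind the marketing environment,
# via a Glue connection defined in the Data Catalog.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=mapped,
    catalog_connection="pega-postgres",  # placeholder Glue connection
    connection_options={"dbtable": "cdh.customer", "database": "pega"},
)

job.commit()
```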
Responsibilities:
Load customer data into Pega CDH on Pega Cloud; monitor data quality and implement related controls.
Migrate the existing ETL for the Pega Marketing PaaS solution to a Pega-managed SaaS service in the cloud.
Design and implement data frameworks and pipelines that process data from on-premises and cloud data sources and feed it into the Pega AWS (PostgreSQL) and Azure data stores; monitor data quality and implement related controls (a minimal data-quality sketch follows this list).
Provide technical leadership and guidance; educate team members and coworkers on development and operations.
Evaluate existing designs, improve methods, and implement optimizations.
Document best practices for data models, data loads, and query performance, and enforce them with team members.
Implement integration plans and interface with testing teams to incorporate plans into the integration testing process.
Perform data archival.
Analyze and validate data sharing requirements with internal and external data partners.
Work directly with business leadership to understand data requirements; propose and develop solutions that enable effective decision-making and drive business objectives.
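As referenced in the responsibilities above, here is a minimal sketch of the kind of data-quality control that could run before a customer batch is loaded. The column names, threshold, and sample data are illustrative assumptions, not the actual Pega CDH schema.

```python
# Hypothetical data-quality control applied before loading a customer extract.
# Column names and the null-rate threshold are illustrative placeholders.
import pandas as pd

REQUIRED_COLUMNS = ["customer_id", "email", "last_updated"]
MAX_NULL_RATE = 0.01  # reject the batch if more than 1% of keys are missing

def check_batch(df: pd.DataFrame) -> pd.DataFrame:
    """Validate a batch; return only the rows that pass, raise on structural problems."""
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Batch is missing required columns: {missing}")

    null_rate = df["customer_id"].isna().mean()
    if null_rate > MAX_NULL_RATE:
        raise ValueError(f"customer_id null rate {null_rate:.2%} exceeds threshold")

    # Drop exact duplicates and rows without a key so the load stays idempotent.
    clean = df.drop_duplicates().dropna(subset=["customer_id"])
    rejected = len(df) - len(clean)
    print(f"Batch accepted: {len(clean)} rows kept, {rejected} rows quarantined")
    return clean

if __name__ == "__main__":
    # Tiny illustrative batch: one exact duplicate row is quarantined.
    sample = pd.DataFrame(
        {
            "customer_id": ["C1", "C2", "C3", "C1"],
            "email": ["a@example.com", "b@example.com", "c@example.com", "a@example.com"],
            "last_updated": pd.to_datetime(["2024-01-01"] * 4),
        }
    )
    check_batch(sample)
```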
Qualifications (Required):
Bachelor’s degree in Information Systems, Computer Science, Engineering, or related field, or the equivalent combination of education, training, and experience
All Advanced level:
Very good understanding of SQL, stored procedures, and packages (a short sketch appears at the end of this listing)
Hands-on experience creating automated data pipelines using modern technology stacks, both for batch ETL and for data processing that loads advanced analytics data repositories.
Hands-on experience with data warehousing, PostgreSQL, Cassandra, Cloud technologies (AWS, ADLS).
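To illustrate the SQL and stored-procedure qualification (and the data archival responsibility above), here is a hedged sketch that defines and invokes a PostgreSQL stored procedure from Python via psycopg2. The connection string, schema, and table names are placeholders and assume the archive table already exists.

```python
# Hypothetical example: creating and calling a PostgreSQL stored procedure with psycopg2.
# Connection details and object names are placeholders, not real resources.
import psycopg2

DDL = """
CREATE OR REPLACE PROCEDURE cdh.archive_customers(cutoff DATE)
LANGUAGE plpgsql
AS $$
BEGIN
    -- Move stale rows into an archive table, then remove them from the hot table.
    INSERT INTO cdh.customer_archive
    SELECT * FROM cdh.customer WHERE last_updated < cutoff;

    DELETE FROM cdh.customer WHERE last_updated < cutoff;
END;
$$;
"""

def archive_before(cutoff: str) -> None:
    conn = psycopg2.connect("dbname=pega user=etl_user host=localhost")  # placeholder DSN
    try:
        with conn:  # commits on success, rolls back on error
            with conn.cursor() as cur:
                cur.execute(DDL)  # (re)define the procedure
                cur.execute("CALL cdh.archive_customers(%s)", (cutoff,))
    finally:
        conn.close()

if __name__ == "__main__":
    archive_before("2023-01-01")
```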