What are the responsibilities and job description for the Senior Data Engineer position at GM Financial?
Overview
Why GMF Technology? GM Financial is set to change the auto finance industry and is leading the way on tech modernization. We have a startup mindset and preserve our small-company culture in a public-company environment, with financial stability and strong growth over a decade-long history. We are data junkies and trust in data and insights to advance our business objectives.
We take our goals of zero emissions, zero collisions, zero congestion, and zero friction very seriously. As the auto finance market leader, we believe we are in the driver's seat to lead the GM EV mission to change the world.
We are building global platforms in LATAM, Europe, and China, and we are looking to grow our high-performing team. GMF comprises more than 10,000 team members globally. Join our fintech culture within a blue-chip company, where we are changing the way we use technology to support our customers and business.
Responsibilities
About the role: We are expanding our efforts into complementary data technologies for decision support, focusing on ingesting and processing large data sets, including data commonly referred to as semi-structured or unstructured. Our interest is in enabling data science and search-based applications on large, low-latency data sets, with processing in both batch and streaming contexts.

To that end, this role will engage with team counterparts to explore and deploy technologies for creating data sets through a combination of batch and streaming transformation processes. These data sets support both offline and inline machine learning training and model execution; other data sets support search-engine-based analytics.

Exploration and deployment activities include identifying opportunities that impact business strategy, collaborating on the selection of data solutions software, and contributing to the identification of hardware requirements based on business needs. The role is also responsible for coding, testing, and documenting new or modified scalable analytic data systems, including automation for deployment and monitoring. This role works with team counterparts to develop end-to-end solutions on a core group of data technologies, and helps develop standards and processes for data engineering projects and cloud initiatives.
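To make the batch-versus-streaming distinction concrete, here is a toy sketch in plain Python. It is illustrative only: in practice this role would use engines such as Spark or Kafka, and the function and record names below are invented for the example.

```python
# Toy contrast between batch and streaming transformation.
# In production this would be a Spark batch job vs. a Spark Structured
# Streaming / Kafka consumer; all names here are illustrative only.

def clean(record):
    """Shared transformation applied in both modes."""
    return {"id": record["id"], "amount": round(record["amount"], 2)}

def batch_transform(records):
    """Batch: the full data set is available up front."""
    return [clean(r) for r in records]

def streaming_transform(record_iter):
    """Streaming: records arrive one at a time; results are yielded
    incrementally instead of waiting for the whole set."""
    for record in record_iter:
        yield clean(record)

raw = [{"id": 1, "amount": 10.567}, {"id": 2, "amount": 3.141}]

batch_result = batch_transform(raw)
stream_result = list(streaming_transform(iter(raw)))
assert batch_result == stream_result  # same logic, different execution model
```

The point of the sketch is that the transformation logic can be shared; what differs is whether the data is processed as a complete set or as an unbounded stream of records.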
JOB DUTIES
- Code, test, deploy, orchestrate, monitor, document, and troubleshoot cloud-based data engineering processes and associated automation in accordance with best practices and security standards throughout the development lifecycle
- Work closely with data scientists, data architects, ETL developers, other IT counterparts, and business partners to identify, collect, and format data from external sources, internal systems, and the data warehouse and lakehouse to extract features of interest
- Contribute significantly to evaluation, research, and experimentation with batch and streaming data engineering technologies to keep pace with industry innovation, while assessing business impact and viability for the use cases at hand
- Work with data engineering groups to inform on and showcase the capabilities of emerging technologies, and to enable adoption of these technologies and their associated techniques
- Contribute significantly to the definition and refinement of processes and procedures for the data engineering practice
- Educate and develop ETL developers on cloud-based data engineering initiatives to enable their transition into the data engineering practice
Qualifications
What makes you a dream candidate?
- Experience with processing large data sets using Hadoop, HDFS, Spark, Kafka, Flume or similar distributed systems
- Experience ingesting various source data formats, such as JSON, Parquet, and SequenceFile, from sources including cloud databases, message queues (MQ), and relational databases such as Oracle
- Experience with cloud technologies (such as Azure, AWS, GCP) and native toolsets such as Azure ARM templates, HashiCorp Terraform, and AWS CloudFormation
- Understanding of cloud computing technologies, business drivers and emerging computing trends
- Thorough understanding of hybrid cloud computing: virtualization technologies; the Infrastructure as a Service, Platform as a Service, and Software as a Service delivery models; and the current competitive landscape
- Working knowledge of object storage technologies, including but not limited to Azure Data Lake Storage (ADLS) Gen2, S3, MinIO, and Ceph
- Experience with containerization, including but not limited to Docker, Kubernetes, Spark on Kubernetes, and the Spark Operator
- Working knowledge of Agile development/SAFe, Scrum, and Application Lifecycle Management
- Strong background with source control management systems (Git or Subversion), build systems (Maven, Gradle, Webpack), code quality tools (Sonar), artifact repository managers (Artifactory), and Continuous Integration/Continuous Deployment (Azure DevOps)
- Experience with NoSQL data stores such as CosmosDB, MongoDB, Cassandra, Redis, or Riak, or technologies that embed NoSQL with search, such as MarkLogic or Lily Enterprise
- Experience creating and maintaining ETL processes
- Knowledge of best practices in information technology governance and privacy compliance
- Experience with REST APIs
- Advanced knowledge of the Databricks platform and its associated features, including Workflows, Unity Catalog, Delta Live Tables, time travel, and the SQL Statement Execution API
- Understanding of Databricks medallion architecture
- Advanced knowledge of programming concepts and languages including SQL and Python/PySpark
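As an illustration of the medallion (bronze/silver/gold) architecture and the SQL skills listed above, here is a minimal sketch using Python's built-in sqlite3 module standing in for Databricks and Delta Lake; the table names and schema are invented for the example.

```python
import sqlite3

# Minimal medallion-style pipeline: bronze (raw) -> silver (cleaned)
# -> gold (aggregated). sqlite3 stands in for Databricks/Delta Lake;
# the schema and table names are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Bronze: raw ingested records, including duplicates and bad rows.
cur.execute("CREATE TABLE bronze_payments (id INTEGER, amount REAL, status TEXT)")
cur.executemany(
    "INSERT INTO bronze_payments VALUES (?, ?, ?)",
    [(1, 100.0, "ok"), (1, 100.0, "ok"), (2, -5.0, "error"), (3, 50.0, "ok")],
)

# Silver: deduplicated, validated records.
cur.execute("""
    CREATE TABLE silver_payments AS
    SELECT DISTINCT id, amount, status
    FROM bronze_payments
    WHERE amount > 0 AND status = 'ok'
""")

# Gold: a business-level aggregate ready for analytics.
cur.execute("""
    CREATE TABLE gold_payment_totals AS
    SELECT COUNT(*) AS payments, SUM(amount) AS total
    FROM silver_payments
""")

payments, total = cur.execute(
    "SELECT payments, total FROM gold_payment_totals"
).fetchone()
conn.close()
```

Each layer refines the previous one: bronze preserves the raw feed, silver applies quality rules, and gold exposes aggregates for downstream consumers; in Databricks each layer would typically be a Delta table rather than a SQLite table.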
Additional Skills
- Ability to troubleshoot complex problems and work across teams to meet commitments
- Excellent computer skills and proficiency in digital data collection
- Ability to work in an Agile/Scrum team environment
- Strong interpersonal, verbal, and writing skills
- Experience with digital technology solutions (DMPs, CDPs, tag management platforms, cross-device tracking, SDKs, etc.)
- Knowledge of Real-Time CDP and Journey Analytics solutions
- Understanding of big data platforms and architectures, data stream processing pipelines/platforms, data lakes, and data lakehouses
- SQL experience: querying data and sharing the insights that can be derived from it
- Understanding of cloud solutions such as Google Cloud Platform, Microsoft Azure, and Amazon Web Services (AWS) cloud architecture and services
- Understanding of GDPR, privacy, and security topics
Experience and Education
- 5-7 years of hands-on experience with data engineering required
- 4-6 years of hands-on experience with processing large data sets required
- 4-6 years of hands-on experience with SQL, data modeling, relational databases, and/or NoSQL databases required
- Bachelor’s Degree in related field or equivalent work experience required
What We Offer: Generous benefits package available on day one, including 401K matching, bonding leave for new parents (12 weeks, 100% paid), tuition assistance, training, GM employee auto discount, community service pay, and nine company holidays.
Our Culture: Our team members define and shape our culture — an environment that welcomes innovative ideas, fosters integrity, and creates a sense of community and belonging. Here we do more than work — we thrive.
Compensation: Competitive pay and bonus eligibility
Work Life Balance: Flexible hybrid work environment, two days a week in office