What are the responsibilities and job description for the Associate - 8088719 position at Goldman Sachs?
Job Description
Job Duties: Associate, Software Engineering with Goldman Sachs Services LLC in Dallas, Texas. Apply machine learning and other advanced data analysis techniques for risk detection and mitigation associated with Identity and Access Management systems. Build auto-measurement metrics to provide auditors with evidence of our IT General Controls (ITGC) effectiveness and support various risk management teams with the data necessary to drive the firm’s risk programs. Design, build, and maintain big data pipelines to process datasets from a variety of data platforms, including but not limited to Relational Database Management Systems (RDBMS) such as Sybase IQ and DB2, NoSQL, Data Lake Hadoop Distributed File System (HDFS), and public cloud. Work with internal teams to develop solution designs for the Production Access Surveillance Program using AWS EC2, Splunk, REST APIs, and Snowflake. Drive requirements and follow firm software development life cycle (SDLC) standards to develop auto-measurement metrics (data refiners) using Spark or Scala. Manage delivery of strategic initiatives and cross-divisional relationships with our stakeholders to understand business objectives and translate business requirements into technical specifications. Develop the long-term vision, architecture, and best practices for the team’s data pipelines, data repositories, and data models, and determine skill sets to build, grow, or recruit for the team. Provide technical assistance to team members as needed. Develop automated data refinement processes for the Application Risk Program to identify vulnerabilities using Kafka and big data pipelines. Maintain application processing hosted on Linux machines to ensure the availability of data pipelines. Build data analytics decision-based user interface dashboards using Tableau and data models.
Job Requirements: Master’s degree (U.S. or foreign equivalent) in Computer Science, Computer Engineering, Data Science, or a related field and one (1) year of experience in the job offered or a related software engineering role, OR Bachelor’s degree (U.S. or foreign equivalent) in Computer Science, Computer Engineering, Data Science, or a related field and three (3) years of experience in the job offered or a related software engineering role. Prior employment must include one (1) year of experience (with a Master’s degree) OR three (3) years of experience (with a Bachelor’s degree) with: SQL and optimizing SQL across datasets; distributed systems as they pertain to data storage and computing; building and optimizing “big data” pipelines, architectures, and datasets; a scripting language such as Python, Java, Spark, or Scala; Relational Database Management Systems (RDBMS) or NoSQL systems such as Snowflake, Sybase IQ, DB2, and Hadoop; working through the full software development life cycle (SDLC), including requirements gathering, design, coding, testing, documentation, deployment, and production support; leading projects across multiple regions related to data pipelines, control auto-measurement, or machine learning; data streaming platforms and AWS services such as Kafka, S3, and EC2; machine learning or other advanced statistics-based data analysis techniques; and data modeling tools and star schema.
©The Goldman Sachs Group, Inc., 2024. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.