What are the responsibilities and job description for the Data Engineer position at Circle?
Engineering at Circle:
In 2020, Circle unveiled Circle APIs: a set of solutions and smarter technology that helps businesses accept payments through a more global, scalable, and efficient alternative to traditional banking rails (spoiler: we're using USD Coin under the hood).
In 2021, Circle launched Circle Yield, Circle Account, and Verite to help our users earn, manage, and secure their funds. We built foundational industry partnerships that grew USDC market cap by 10x. To broaden the utility of USDC, we also added support for digital dollars on six new blockchains: SOL, XLM, HBAR, TRX, AVAX, and FLOW.
Over the next year, we will continue to grow USD Coin and the Circle API platform to become the world's #1 US dollar stablecoin by building, scaling, and partnering with both web2 and web3 ecosystems. Expect to see product announcements and launches for L1, L2, DeFi, and identity solutions!
You will aspire to our four core values:
- Multistakeholder - you have dedication and commitment to our customers, shareholders, employees and their families, and local communities.
- Mindful - you seek to be respectful, an active listener and to pay attention to detail.
- Driven by Excellence - you are driven by our mission and our passion for customer success, which means you relentlessly pursue excellence, do not tolerate mediocrity, and work intensely to achieve your goals.
- High Integrity - you seek open and honest communication, and you hold yourself to very high moral and ethical standards. You reject manipulation, dishonesty and intolerance.
Here is our team hierarchy for individual contributors:
Principal Data Engineer (VI)
Senior Staff Data Engineer (V)
Staff Data Engineer (IV)
Senior Data Engineer (III)
Data Engineer (II)
Data Engineer (I)
Your team is responsible for:
As a member of the Data Engineering team, you own the core Big Data/ML platform: data ingestion, processing, and serving; ETL/ELT pipelines; data governance and security compliance; data analytics and visualization tooling; and the data modeling and data warehouse that power our Product, Engineering, Analytics, and Data Science teams with experimentation, operational excellence, and actionable insights to fuel and accelerate business growth.
You'll work on:
- Work across functional teams on the design, deployment, and continuous improvement of a scalable data platform that ingests, stores, and aggregates various datasets, including data pipelines, platforms, and warehouses, and that surfaces data to both internal and customer-facing applications.
- Be a subject matter expert on data modeling, data pipelines, data quality and data warehousing.
- Design, build, and maintain ETL/ELT data pipelines to source and aggregate the data required for various analysis and reporting needs, and continually improve the operations, monitoring, and performance of the data warehouse (a minimal pipeline sketch follows this list).
- Develop integrations with third party systems to source, qualify and ingest various datasets.
- Provide data analytics and visualization tools to extract valuable insights from the data to enable data-driven decisions.
- Provide ML data platform capabilities for data science teams to perform data preparation, model training and management, and run experiments.
- Work closely with cross-functional groups and stakeholders, such as the product, engineering, data science, security, and compliance teams, on data modeling, general data lifecycle management, data governance, and processes for meeting regulatory and legal requirements.
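To make the pipeline work above concrete, here is a minimal sketch of a daily ETL job using Airflow's TaskFlow API (Airflow is one of the orchestrators named below). The DAG name, table contents, and transforms are illustrative assumptions, not Circle's actual pipeline:

```python
# A minimal, hypothetical sketch of a daily ETL pipeline using Airflow's
# TaskFlow API. Data shapes, names, and transforms are assumptions.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule_interval="@daily", start_date=datetime(2022, 1, 1), catchup=False)
def daily_transactions_etl():
    @task
    def extract(ds=None):
        # In a real pipeline this would query a source system or object
        # store for the partition matching the execution date `ds`.
        return [{"tx_id": 1, "amount_usd": 250.0, "date": ds}]

    @task
    def transform(rows):
        # Example transform: drop non-positive amounts and tag the source.
        return [dict(r, source="payments_api") for r in rows if r["amount_usd"] > 0]

    @task
    def load(rows):
        # In a real pipeline this would write to a warehouse table via a
        # provider hook (Snowflake, Redshift, BigQuery, etc.).
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


daily_transactions_etl()
```

Chaining the tasks this way lets Airflow handle scheduling, retries, and monitoring of each stage independently, which is the day-to-day substance of the "operations, monitoring and performance" responsibility above.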
You'll bring to Circle (not all required):
- Experience in multiple data technologies, such as Spark, Presto, Impala, YARN, Parquet, MLflow, Kafka, AWS Kinesis, Flink, Spark Streaming, etc.
- Experience with workflow orchestration management engines such as Airflow, Azkaban, Oozie, etc.
- Experience with Cloud Services (AWS, Google Cloud, Microsoft Azure, etc).
- Experience with SQL and NoSQL databases, such as MySQL, PostgreSQL, Cassandra, HBase, Redis, DynamoDB, Neo4j, etc.
- Experience building scalable infrastructure to support batch, micro-batch, or stream processing of large volumes of data (see the PySpark sketch after this list).
- Experience in data governance and provenance.
- Knowledge of the internals of open-source big data technologies or related systems.
- Proficiency in one or more programming languages (Java, Scala, Python).
- Experience in similar business domains, such as payment systems, credit cards, bank transfers, blockchains, etc.
- Excellent communication skills: able to collaborate with cross-functional and remote teams, share ideas, and present concepts effectively.
- Ability to tackle complex and ambiguous problems.
- Self-starter who takes ownership, drives results, and enjoys moving at a fast pace.
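As a concrete illustration of the batch-processing experience mentioned above, here is a minimal PySpark sketch that aggregates transaction volume per day from a Parquet dataset. The bucket path, column names, and schema are hypothetical, not Circle's actual data:

```python
# A minimal, hypothetical PySpark batch job: aggregate transaction volume
# per day from a Parquet dataset. Paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_volume").getOrCreate()

# Read a (hypothetical) Parquet dataset of transactions.
tx = spark.read.parquet("s3://example-bucket/transactions/")

daily_volume = (
    tx.where(F.col("amount_usd") > 0)
      .groupBy(F.to_date("created_at").alias("tx_date"))
      .agg(
          F.count("*").alias("tx_count"),
          F.sum("amount_usd").alias("total_usd"),
      )
      .orderBy("tx_date")
)

# Write the aggregate back out for downstream analytics and BI tooling.
daily_volume.write.mode("overwrite").parquet("s3://example-bucket/daily_volume/")

spark.stop()
```

The same aggregation could be expressed in Presto/Impala SQL or as a Flink or Spark Streaming job for the micro-batch and streaming cases; the point is familiarity with the pattern, not any one engine.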
Additional Information:
- This position is eligible for day-one PERM sponsorship for qualified candidates.