What are the responsibilities and job description for the Data Architect position at SPR?
DATA ARCHITECT
WHO IS SPR?
SPR helps companies implement the right technology that helps them balance users’ expectations today while planning for tomorrow’s business demands. A technology modernization firm, SPR works together with clients to develop or modernize digital products and platforms.
We’re 200 strategists, developers, designers, architects, consultants, thinkers, and doers in Chicago and Milwaukee. We work with 150 mid- to enterprise-size clients across industries like professional services and manufacturing. We think about the end users and rigorously apply the latest technologies and frameworks to address our clients’ needs. Specializing in custom software development, cloud, data, and user experience solutions, SPR promises to Deliver Beyond the Build by providing proactive advice, sharing knowledge, responding to change in an agile way, and investing time to deeply understand our clients’ business.
We operate in a fun, casual work environment and have great benefits including competitive salary, bonuses, generous vacation time, big fitness incentives, and medical/dental/vision insurance. By joining the SPR team, you’ll be problem solving, working hard and making an impact through your projects – and you’ll be part of a unique culture and rewarded for it.
WHAT IS THE POSITION?
As a Data Architect at SPR, you will provide architectural direction for clients as needed. You must be experienced in large-scale system implementations with complex data processing and analytics pipelines. You must demonstrate a deep understanding of data analytics best practices, and have experience in data modeling, data cleansing, data mining, machine learning, and data virtualization. You must be able to demonstrate innovative approaches to complex problems that deliver industry-leading experiences for our clients.
RESPONSIBILITIES
| Design and maintain sustainable data architecture
| Work with the business to understand requirements and design innovative, repeatable solutions
| Provide guidance for new application features
| Collaborate with the data staff to create optimal data models for data ingestion and analytics
| Develop, manage and maintain data models
| Design, implement, and maintain database objects (tables, views, indexes, etc.) and database security
| Maintain performance through tuning, parallelism, optimization, etc.
| Ensure structure for existing data is effective
| Architect/design and develop large complex ETL jobs
| Ensure data quality through creation of audit controls, proactive monitoring and data cleansing techniques
| Maintain data warehouse performance by optimizing batch processing through parallelization, performance tuning, etc.
PROFESSIONAL QUALIFICATIONS
| Motivated, self-starter with ability to learn quickly
| Experienced with SQL and Python (R is a strong plus)
| Experience in architecting and engineering innovative data analysis solutions
| Familiarity with architectural patterns for data-intensive solutions
| Expertise in real-time streaming and migrating batch-style data processing to streaming and micro-batch solutions
| Experience using distributed messaging systems to incrementally rewrite systems in place
| Knowledge of core RDBMS principles (setup, tuning, design), as well as newer unstructured data tools
| Experience developing large scale, complex logical data models (along with physical implementations of the logical models)
| Familiarity with consulting and traditional application design
| Experience estimating technical solution builds and contributing to custom proposals
| Excellent written and verbal communication skills
| Display solid problem-solving abilities in the face of ambiguity
| Must be a hands-on individual who is comfortable leading by example
| Experience with Agile Methodology
| Possess excellent interpersonal and organizational skills
| Able to manage your own time and work well both independently and as part of a team
TECHNOLOGIES WE USE
Cloud (Azure, AWS, Cloud Foundry, Heroku, Mesos, DC/OS) / RDBMS (SQL Server, PostgreSQL, Oracle, DB2) / NoSQL (MongoDB, RavenDB, DocumentDB, Cassandra, MariaDB, Riak) / Python (including Databricks) / Big Data (Cloudera & Hortonworks Hadoop distributions, including Hive, Pig, Sqoop, Spark) / Integration Tools (Apache NiFi, StreamSets, Azure Data Factory, AWS Glue, Talend) / ELK (Elasticsearch, Logstash, Kibana) / Machine Learning (Azure ML tooling, TensorFlow, AWS SageMaker, scikit-learn) / Data Visualization (Grafana, Kibana) / Microsoft PowerShell / AWS SDK / Fast Data (Apache Ignite / GridGain, Apache Geode / Pivotal GemFire)
EDUCATION & EXPERIENCE
| Bachelor’s Degree, preferably in Data Science, Analytics, Computer Science, Engineering or Science / Technology-based disciplines
| 3-5 years of professional experience
If this sounds like the kind of challenge you would be up for every day, we would love to hear from you. We are an Equal Opportunity Employer, including disability and veteran status.
Salary : $101,000 - $128,000