
Senior Data Engineer

Scottsdale, AZ
  • Posted: over a month ago
  • Full-Time
Job Description
Infomatics is hiring a Senior Data Engineer on a direct hire/FTE basis to lead design, development, and implementation efforts on our client's Enterprise Analytics platform, which includes an MPP Enterprise Data Warehouse, a cloud-based Data Lake, and other Big Data technologies.
Essential Duties and Responsibilities:
  • Guides the strategic capabilities and architecture design related to Data Warehousing, Big Data, and Data Analytics in close collaboration with Technical Leadership.
  • Architects and develops code for large-scale ETL pipelines using data processing frameworks that support distributed data warehouse and distributed analytic systems running in the cloud.
  • Leads side-by-side design sessions with our business experts and engineers to understand our product and business, and to design, socialize, and implement proper conceptual, logical, and physical data models.
  • Spearheads efforts with multiple, disparate data providers, as well as downstream consumers, to engineer efficient, traceable, and fault-tolerant batch and streaming data pipelines that support complex analysis and enterprise-wide Business Intelligence applications.
  • In collaboration with our Data Science team, leads efforts to develop and implement production-grade code/pipelines, in languages including Python, leveraging advanced statistical routines, Machine Learning, and other AI technologies.
  • Responsible for oversight and improvement of approaches to monitor and support data quality automation efforts. Working alongside QA Engineering, implements quality-check routines as part of data processing frameworks and validates the flow of information.
  • Drives engineering efforts for Data Warehouse and Big Data infrastructure needs, including, but not limited to, automation of system builds, security requirements, performance requirements, and logging/monitoring, in collaboration with DevOps engineers.
  • Resolves escalated issues related to complex data anomalies and performance. In addition, implements adjustments, conducts after-action reviews to determine root cause and corrective measures, and ensures knowledge transfer to the operations support team.
  • Provides mentorship and guidance to grow the technical depth of junior Data Engineers and the broader BI & Analytics team.
  • Ensures documentation is complete and up to date with regard to standards, best practices and technical specifications.
  • Leads design and code review sessions and provides feedback and mentorship to peers.
  • Educates the BI & Analytics team on the latest industry technologies, trends, and strategies; provides thought leadership around approaches, patterns, and best practices in the Data Engineering domain.
  • Assists employees, vendors, and other customers by answering questions related to Data Warehousing and Big Data processes, procedures, and services.
  • Collaborates with the data team, product owners, and Scrum Master to refine and estimate stories/epics, delivering on commitments on time and with high quality while providing exceptional customer service.
  • Provides Tier 3 support and creates run books to mentor Tier 1 and Tier 2 support staff.
  • Other duties as assigned
Background & Experience Required:
  • This position requires a minimum of 10 years of progressive database development and integration experience.
  • Demonstrated experience in designing and implementing large-scale logical and physical data models, with expert knowledge of Kimball methodologies.
  • Demonstrated advanced/expert-level experience in multiple relational (RDBMS) and non-relational (NoSQL) data platforms is required.
  • Demonstrated advanced experience in SQL tuning, indexing, distribution, partitioning, data access patterns and scaling strategies is required.
  • Demonstrated advanced scripting knowledge with Jupyter/Python or R is required.
  • Demonstrated expert experience with data integrations and data processing for business intelligence and analytics workloads is required.
  • Experience in software versioning tools including git, SVN or related.
  • Advanced experience with AWS S3 or other distributed object stores, MPP Data Warehousing (e.g., AWS Redshift), and Elastic MapReduce is needed.
  • Experience implementing statistical models, Machine Learning, and other AI technologies in production data pipelines is required.
  • Experience developing containerized solutions.
  • Demonstrated experience implementing Big Data pipelines including Spark, Hive, Sqoop, or related technologies.
  • Experience implementing streaming data pipelines to support real-time and near-real-time decision making.
  • Understanding of Software Development Life Cycle (SDLC) methodologies such as Agile and Waterfall is needed.
  • Proven track record of designing data platforms leveraging large complex data sets.
Educational Requirements:
This position requires a Bachelor's Degree in Computer Science, Computer Information Systems, or a related field, or equivalent experience.
Data or cloud related certifications are a plus.