
Big Data Engineer

Patterns in Atlanta, GA, USA

Compensation

$70,000 to $130,000 Annually

Benefits Offered

401K, Medical

Employment Type

Full-Time

Our client's Consumer Services Data Engineering Team is building a new data warehouse and needs a talented engineer to help drive this major milestone across the line. The data engineer will have experience sourcing social media data into the newly designed data warehouse.

Requirements:

· MS/BS in Computer Science or a related technical discipline

· 5+ years of industry experience, including 3+ years of relevant big data, dimensional, or relational database experience

· 5+ years of experience with Python and Snowflake, including strong Python programming skills

· Demonstrated experience using APIs to extract data, with consideration of authentication and authorization methods to adhere to privacy and security policies (see the sketch after this list)

· Ability to architect, design, and implement solutions with AWS Virtual Private Cloud (VPC), EC2, AWS Data Pipeline, AWS CloudFormation, Auto Scaling, Amazon S3, EMR, and other AWS products

· Data modeling experience is required, as the engineer will source the data and build the data model

· Extensive experience working with Hadoop and related processing frameworks such as Spark, Hive, and Sqoop

· Experience with workflow orchestration tools like Apache Airflow

· Experience with performance and scalability tuning

· Experience in Agile/Scrum application development using JIRA.

· Experience working in a public cloud environment, particularly AWS

· Experience with Bitbucket

· Experience implementing coding standards and long-term best practices
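
As a rough illustration of the API-based sourcing described above, the sketch below pulls paginated records from a REST endpoint using a bearer token; the endpoint URL, environment variable, and field names are hypothetical placeholders, not Sprinklr's actual API.

import os
import requests

API_URL = "https://api.example.com/v1/messages"  # hypothetical endpoint
TOKEN = os.environ["SOCIAL_API_TOKEN"]  # keep credentials out of source code

def fetch_page(cursor=None):
    """Fetch one page of results, authenticating with a bearer token."""
    params = {"cursor": cursor} if cursor else {}
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params=params,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def fetch_all():
    """Walk cursor-based pagination until no cursor is returned."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page.get("data", [])
        cursor = page.get("next_cursor")
        if not cursor:
            break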

Required Soft Skills:

· Demonstrated experience and ability to deliver results on multiple projects in a fast-paced, agile environment

· Excellent problem-solving and interpersonal communication skills

· Strong desire to learn and share knowledge with others

· Passionate about data and striving for excellence

· Willingness to learn and understand the business, and to communicate with business stakeholders to implement business-rule transformations and data validation while coding

· Desire and ability to work collaboratively with your teammates, especially architects, product managers, scrum masters, and engineers, to arrive at the best solution to a problem

Nice to Have:

· Familiarity with practices like Continuous Integration, Continuous Delivery, and Automated Testing

· Familiarity with infrastructure-provisioning tools such as CloudFormation and automation tools such as Jenkins or CircleCI

· Experience with global call center data

· Understands social media terminology and sourcing social media data, e.g., Twitter, Facebook, Instagram

· ERwin experience is a plus

· Experience working with and configuring data, custom fields, and reporting in the Sprinklr social application is a plus

Role Responsibilities:

· Understand the technical details of the Sprinklr platform in order to extract data using its APIs

· Interrogate Sprinklr's authentication and authorization methods to make sure they adhere to the client's privacy and security policies

· Work with architects, product managers, and scrum masters to deliver sprint goals every two weeks

· Work with the data enablement team (our business stakeholders) to understand data requirements pertaining to metrics and quality

· Design and implement features in collaboration with team engineers, product owners, data analysts, and business partners using Agile / Scrum methodology

· Model the social media data elements and make sure they integrate with the existing data model

· Work with the Salesforce engineer to determine how to integrate Sprinklr social media data with the Salesforce application

· Contribute to overall architecture, frameworks and patterns for processing and storing large data volumes

· Design and implement distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem (see the sketch after this list)

· Build utilities, user defined functions, and frameworks to better enable data flow patterns

· Research, evaluate, and utilize new technologies/tools/frameworks centered around high-volume data processing

· Define and apply appropriate data acquisition and consumption strategies for given technical scenarios

· Build and incorporate automated unit tests and participate in integration testing efforts

· Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to

· Work across teams, including third-party vendors, to resolve operational and performance issues.
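
For context on the pipeline work described above, here is a minimal PySpark sketch that aggregates raw social media events into a daily fact table; the S3 paths, channel values, and column names are illustrative assumptions, not the client's actual schema.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("social-media-load").getOrCreate()

# Read raw JSON events landed in object storage (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw/social/")

# Keep the supported channels, derive an event date, and count messages.
daily_counts = (
    raw.filter(F.col("channel").isin("twitter", "facebook", "instagram"))
       .withColumn("event_date", F.to_date("created_at"))
       .groupBy("event_date", "channel")
       .agg(F.count("*").alias("message_count"))
)

# Write a partitioned, columnar table for downstream consumption.
(daily_counts.write.mode("overwrite")
     .partitionBy("event_date")
     .parquet("s3://example-bucket/curated/social_daily/"))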
