- You will architect, design, and build critical data infrastructure services.
- You will build data capture and collection services using modern open-source and cloud-native technologies.
- You will design and develop the core software modules that power real-time and batch data processing.
- You will build stream processing software to ingest, process, enrich, and store a variety of data assets.
- You will work across all aspects of data processing with a keen eye for data quality, integrity, and availability.
- You will evaluate and implement new open-source and cloud-native tools and technologies as needed.
- You will participate in the on-call rotation supporting the data platform.
- You have a BS/MS in Computer Science or a related technical field.
- You have a strong software engineering background and a proven track record of delivering high-quality products at scale.
- You are currently in a hands-on role, with strong coding skills in Java or Scala (Python and Go are a plus) and the ability to design systems that meet business needs.
- You have a solid understanding of high-performance data capture and collection systems, including how to design APIs around them and how to design for reliable data delivery.
- You understand the internals of big data technologies such as Kafka, Spark, Flink, and Elasticsearch.
- You have 5+ years of experience building high-performance data ingestion software and solving for concurrency, latency, and efficiency.
- You have experience building and scaling API services (REST, gRPC, etc.) to millions of requests per second.
- You have excellent interpersonal, technical, and communication skills.
- You can prioritize multiple tasks in a fast-paced environment.
- You have experience working with public clouds (GCP, AWS, etc.).
- You have experience with containers, Kubernetes, service meshes, and related technologies.