Build software across the platform for data processing, storage, and serving of large-scale web APIs, using cutting-edge technologies that operate in real time with high availability
Conduct quantitative analytics, data mining, and discovery
Identify ways to make the platform more scalable, resilient, and reliable
Lead the transformation of a petabyte-scale, batch-based processing platform into a near-real-time streaming platform using technologies such as Apache Kafka, Cassandra, Spark, and other open-source frameworks
Ensure strong performance by implementing and refining robust data processing, REST services, RPC (both in and out of HTTP), and caching technologies
Work closely with data architects, stream processing specialists, API developers, our DevOps team, and analysts to design systems that can scale elastically
Mentor other software engineers by developing reusable frameworks
Review designs and code produced by other engineers
Provide expert-level advice to data scientists, data engineers, and operations teams on delivering high-quality analytics through machine learning and deep learning, exposed via data pipelines and APIs
Embrace the DevOps mentality to build, deploy, and support applications in the cloud with minimal help from other teams
Work with data scientists to operationalize machine learning models and build applications that harness the power of machine learning
5+ years’ experience developing with a mix of languages (Java, Scala, Python, etc.) and open-source frameworks to implement data ingest, processing, and serving technologies on a near-real-time basis
Experience with big data frameworks such as Hadoop and Apache Spark; NoSQL systems such as Cassandra or DynamoDB; and streaming technologies such as Apache Kafka
Understanding of reactive programming and dependency-injection frameworks such as Spring for developing REST services
Hands-on experience with newer technologies relevant to the data space, such as Spark, Kafka, and Apache Druid (or other OLAP databases)
Hands-on experience developing and deploying in a cloud-native environment (preferably AWS)