About the role
We are India’s largest B2B ed-tech start-up, enabling 15k+ educators to create their digital identity with their own branded app. We have grown more than 10X in the last year, making us India’s fastest-growing video learning platform. Six out of ten coaching institutes are unable to expand on their own, so we at Classplus decided to empower these coaching institutes, making them digitally enabled to compete with e-learning giants and be a part of the evolving world of e-learning.
We’re currently valued at $250 million, with marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, and RTP Global supporting our vision. We owe everything to our people: we recently announced our first buyback, worth $1M, and now, as we go global, we are excited to have new folks on board who can take the rocketship higher.
What will you do?
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and GCP technologies.
Create and maintain optimal data pipeline architecture.
Handle the implementation of data science models at scale.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
You should apply if you
Have a solid understanding of engineering best practices, continuous integration, and incremental delivery.
Have a good understanding of Python, SQL, and shell scripting.
Have knowledge of implementing and maintaining data science models at scale.
Have strong analytical, debugging, and troubleshooting skills, and experience with product line analysis.
Have hands-on experience with GCP, including BigQuery, Airbyte, cloud orchestration, Cloud Storage, and other GCP services.
Have proficiency with tools like Kubernetes; familiarity with Redash or other visualization tools is good to have.
Are a follower of agile methodology (sprint planning, working on JIRA, retrospectives, etc.).
Have hands-on experience with queueing systems such as Pub/Sub (preferred).
Have knowledge of version control (Git) and deployment processes (CI/CD).