Total Experience: 4 to 6 Years
Work Location: Bangalore
Education Qualification: BE/B.Tech/MCA/M.Sc/M.Tech
- Designing, building and optimizing an efficient big-data processing pipeline.
- Creating easy-to-use interfaces for front-end clients.
- Integrating with various internal and external data sources.
- Languages and Technologies: Spark, Scala/Java, Node.js, ElasticSearch, SQL, GraphQL, Kafka, Kubernetes.
- Design and implement big data solutions.
- Create simple and coherent data models for complex data.
- Create and optimize ETL jobs over complex data.
- Program Spark jobs using Scala/Java while following best practices.
- Identify technical debt and propose solutions to eliminate it.
- Participate in design and code discussions with colleagues.
- Are experienced in at least one object-oriented programming language (e.g. C#, C++, Java, or another).
- Preferably have work experience with system, platform, or engine development.
- Have experience or knowledge of Node.js/TS, Spark/Scala and Kubernetes.
- Might have some experience with or knowledge of ElasticSearch, GraphQL, Kafka/RabbitMQ, or similar technologies.
- Are familiar with Agile development methods (Scrum / Kanban).
- Are self-motivated and take responsibility for organizing your own time.
- Have a great deal of drive, strong commitment and a good sense of humour.
- Are a team player.
- Like to work in a multi-disciplinary and cooperative environment.
- Have good communication skills and you are proficient in English, both spoken and written.