Big Data Developer (m/f/d) - Hadoop, Spark, Java, Druid

Offer by Smaato
About this job

Location options: Paid relocation
Job type: Full-time
Experience level: Mid-Level, Senior
Role: System Administrator
Industry: Ad Tech, Advertising Technology, Mobile
Company size: 201-500 people


apache-spark, apache-kafka, druid, bigdata, java, sysadmin

Job description

About the job

The Big Data Developer works on our SaaS platform, and brings passionate inquisitiveness, primary research, and forward thinking to every assignment. Through shared responsibility for all team deliverables, and communication with Product Owners as well as other stakeholders within the company, the Big Data Developer builds software to pass automated acceptance tests and to deliver sprint commitments.

You can expect a very international team of Developers based in Hamburg. As part of our self-organizing Scrum teams, you'd be structuring two-week sprints following iterative methods. Our team sizes vary between 4 and 7 Developers. The hierarchies are lean and the management style is hands-off; in other words, you are trusted to manage your own work.

Key responsibilities:

  • Design, develop, deliver and operate scalable, high-performance, real-time data processing software using technologies such as Kafka, Spark, and Druid
  • Work proactively on the system architecture
  • Participate in the evaluation and selection of the right technologies
  • Support the business and product stakeholders by developing new and innovative data-driven products
  • Implement varied, customized, and often entirely new features as part of your daily tasks
  • Interact with UI/UX Engineers to develop innovative ways to visualize huge amounts of data
  • Collaborate with the product management team to incorporate the needs of our customers following an agile process


Requirements:

  • Several years of experience in big data technologies and tools
  • You enjoy operating your applications in production and strive to make on-call duty obsolete
  • Bachelor's degree in Computer Science or equivalent
  • Experience with at least one of the following technologies: Spark, Kafka, Druid, NoSQL databases
  • Familiarity with the AWS stack is an advantage
  • Solid experience programming in Java, Scala, or a similar language
  • Knowledge of and experience with statistical methods, machine learning, and AI
  • Experience with relational database systems
