
Senior Product Engineer - Data (f/m/x)

  • Hybrid
  • Berlin, Germany
  • TECH

Empower global learning as a Data Engineer at Sharpist! Build data pipelines, shape AI-driven insights, and enhance personalized coaching for top brands like IKEA and LVMH. 🚀

Job description

About Sharpist

We are an exciting start-up on a journey to innovate in the ed-tech industry and to deliver measurable business outcomes to our clients through a bespoke digital solution built on 1:1 coaching.

As Europe's leading solution for sustainable leadership development, we believe personalized learning is a right everyone should have access to. That is why we combine the individual employee's needs with the demands of global organizations for worldwide scalability.

We bet on unparalleled user engagement, driven by 1:1 human interaction with a coach and a personalized digital learning journey that makes personal growth transparent, measurable, and enjoyable. 🚀

What you'll do here at Sharpist

We are looking for a Senior Product Engineer with a deep specialization in data.

What does this mean? It means you are, first and foremost, a product-minded software engineer. You care about the end-user experience and business outcomes, but your primary medium for solving those problems is data infrastructure, pipelines, and architecture.

You will not be a "service bureau" answering ticket requests for SQL queries or just building internal dashboards. Instead, you will be a core builder on a stream-aligned team, treating data as production infrastructure. You will bridge the gap between backend application engineering and data engineering to architect the systems that power our core product capabilities—such as matching users to coaches, evaluating our AI agents, and personalizing the learning journey.

Your Responsibilities

As a Senior Product Engineer (Data), you will own the data layer from concept to production. Your core responsibilities include:

  • Finding and Implementing Key Metrics: You won't just move data around; you will partner closely with Product, Design, and Business stakeholders to figure out what we should be measuring. You will identify, define, and implement the core metrics and telemetry that track user behavior, drive product iteration, and directly impact business growth.

  • Building Data-Intensive Product Features: You will architect and deploy the backend data engines that power our customer-facing features. This includes our real-time coach matching algorithm and the evaluation pipelines (Evals) that ensure our AI agents are accurate and safe.

  • Engineering Enablement: Traditional Data Engineers often become bottlenecks. Your goal is to build self-service data infrastructure and abstractions. You will create the tools that allow the rest of the engineering team to independently query data, track their own metrics, and answer their own questions.

  • Treating Data as Code: You will apply rigorous software engineering practices to our data workflows, ensuring everything is version-controlled, tested, and reliable.
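As a minimal sketch of the "data as code" principle above (the metric, class names, and schema are hypothetical, not Sharpist's actual code): a metric lives in the codebase as a pure, deterministic function, so it can be version-controlled and covered by ordinary unit tests in CI.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CoachingSession:
    user_id: str
    completed: bool


def completion_rate(sessions: list[CoachingSession]) -> float:
    """Share of coaching sessions that were completed.

    Pure and deterministic: no hidden I/O, so the metric definition
    can be reviewed, versioned, and tested like any other code.
    """
    if not sessions:
        return 0.0
    done = sum(1 for s in sessions if s.completed)
    return done / len(sessions)


def test_completion_rate() -> None:
    # A unit test that would run in CI alongside the rest of the codebase.
    sessions = [
        CoachingSession("u1", completed=True),
        CoachingSession("u2", completed=False),
    ]
    assert completion_rate(sessions) == 0.5
    assert completion_rate([]) == 0.0
```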

The Stack

We use a modern, scalable data and backend stack. Ideally, you have hands-on experience with all of these tools. However, we hire for engineering fundamentals and problem-solving ability—if you haven't used a specific tool in our stack, we expect you to be eager and capable of learning it quickly:

  • MongoDB

  • Google BigQuery

  • Google Firebase

  • Google Dataform

  • Looker Studio

  • TypeScript & Python

Job requirements

Who You Are

You are a Software Engineer whose medium is Data.

  • You are a Systems Thinker. You don't just write a query; you ask, "What happens if this data stream fails? Is this operation idempotent? How do we handle eventual consistency in the UI?"

  • You are a Builder. You are comfortable jumping into the backend codebase to expose your data via an API. You don't stop at the database layer.

  • You have Engineering Rigor. You believe that data pipelines deserve the same standard of care as production software: CI/CD, Unit Tests, and Observability.
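To illustrate the idempotency question above (a hypothetical sketch; the event shape and function names are illustrative): keying writes by a stable event ID means that replaying the same event after a pipeline retry cannot create duplicates or change the resulting state.

```python
def apply_event(store: dict[str, dict], event: dict) -> None:
    """Idempotent upsert keyed by a stable event ID.

    Replaying the same event (e.g. after a retry in a data
    pipeline) leaves the store in exactly the same state.
    """
    store[event["event_id"]] = {
        "user_id": event["user_id"],
        "score": event["score"],
    }


store: dict[str, dict] = {}
event = {"event_id": "e-1", "user_id": "u-42", "score": 0.9}

apply_event(store, event)
apply_event(store, event)  # retry: same result, no duplicate row

assert len(store) == 1
assert store["e-1"]["user_id"] == "u-42"
```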

The "Expertise" Bar

  • Must Have: Deep experience designing data-intensive applications. You understand the CAP theorem, race conditions, and how to model data for both relational/NoSQL (MongoDB) and analytical (BigQuery) needs.

  • Must Have: Strong programming skills. Ideally, you are highly proficient in TypeScript or Python, but hands-on experience building robust services in other languages is also highly appreciated.

  • Nice to Have: Experience with LLM orchestration, AI evaluation pipelines, or building recommendation/matching algorithms.

Who You Are NOT

  • A "Ticket Taker": If you prefer to wait for a PM to hand you a schema definition, this role is not for you.

  • A "Pure Analyst": If you love writing SQL but hate the idea of managing infrastructure, deploying application code, or dealing with APIs, you will be unhappy here.

  • A "Lone Wolf": If you want to build complex systems in isolation without talking to Product Managers or Designers to understand the business 'why', you won't thrive in our stream-aligned teams.

What We Offer

  • Unlimited Coaching: Access certified coaches to support your personal and professional growth whenever you need it.

  • Hybrid Working Mode: Flexibility to work from our Berlin hub or remotely (within policy).

  • Growth Budget: Dedicated budget for courses, workshops, and certifications.

  • Pension Scheme (bAV): Attractive retirement savings program.

  • The "Builder" Environment: A culture that values shipping, learning, and outcome over output.
