airflow Remote Jobs

139 Results

1d

Data Engineer (January 2025 Internship) M/F

Showroomprive.com - Saint-Denis, France, Remote
airflow · sql · c++

Showroomprive.com is hiring a Remote Data Engineer (January 2025 Internship) M/F

Job Description

At the heart of Showroomprive's Data division, you will join the "Data Engineering" team.
Your work will focus on extracting, processing, and storing data by maintaining and evolving a data warehouse used by the rest of the Data teams (BI, Data Science, Marketing Analysts).

 

Your missions will be split into two parts:

  • A main project to carry out end to end around data: its processing, its quality control, and its accessibility.
  • The team's day-to-day tasks (developing new data flows, exporting business data, ad hoc queries, access management, etc.).

To carry out these missions, our team uses market-leading data processing tools, relying on Airflow for pipelines and on a market-leading cloud platform.

You will join a Data Engineering team of 3 people who will support you day to day, as well as a Data department of 30 people with diverse and deep expertise in their fields.

Qualifications

Final-year student of a Master's-level (Bac+5) program such as an engineering school (Data or Software Engineering track).

Through your studies or previous experience, you have built solid foundations in SQL and Python. You have also developed a real appetite for learning on your own and are very curious when it comes to Data.

Your rigor and energy will be key assets in carrying out the missions entrusted to you.

See more jobs at Showroomprive.com

Apply for this job

2d

Data Engineer

Status - Remote (Worldwide)
airflow · sql · docker · linux · python

Status is hiring a Remote Data Engineer

About Status

Status is building the tools and infrastructure for the advancement of a secure, private, and open web3. 

With the high level goals of preserving the right to privacy, mitigating the risk of censorship, and promoting economic trade in a transparent, open manner, Status is building a community where anyone is welcome to join and contribute.

As an organization, Status seeks to push the web3 ecosystem forward through research, creation of developer tools, and support of the open source community. 

As a product, Status is an open source, Ethereum-based app that gives users the power to chat, transact, and access a revolutionary world of Apps on the decentralized web. But Status is also building foundational infrastructure for the whole Ethereum ecosystem, including the Nimbus ETH 1.0 and 2.0 clients, the Keycard hardware wallet, and the Waku messaging protocol, the p2p communication layer for Web3.

As a team, Status has been completely distributed since inception. Our team is currently 200+ core contributors strong, and welcomes a growing number of community members from all walks of life, scattered all around the globe. 

We care deeply about open source, and our organizational structure has minimal hierarchy and no fixed work hours. We believe in working with a high degree of autonomy while supporting the organization's priorities.

About the Infrastructure Team

We’re a team scattered across the world, working to provide various tools and services for the projects in the company. We work asynchronously, with a high level of independence.

We are seeking a Data Engineer to construct and maintain dynamic dashboards for our Open Source projects. The successful candidate will collect, analyze, and interpret data to provide actionable insights, enabling us to effectively track and improve our project progress.

 

Key responsibilities

  • Develop and implement data pipelines for Open Source Project development, communication campaigns, and finance overview
  • Build visualization tools to track and analyze project progress, communication effectiveness, and financial health
  • Support team leads in identifying the key elements for KPI analysis
  • Manage Data Warehouse to maintain data quality
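
The pipeline responsibilities above all reduce to running tasks in dependency order, which is the DAG model that tools like Airflow, DBT, and Airbyte formalize. As a rough stdlib-only sketch of that idea (task names and logic are made up for illustration, not part of this role):

```python
# Toy dependency-ordered pipeline runner, illustrating the DAG model
# that orchestrators like Airflow formalize. Task names are illustrative.

def run_pipeline(tasks, deps):
    """Run tasks so that every task's upstream dependencies finish first."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)          # ensure upstream tasks complete first
        tasks[name]()              # then execute this task
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

results = {}
tasks = {
    "extract":   lambda: results.update(raw=[3, 1, 2]),
    "transform": lambda: results.update(clean=sorted(results["raw"])),
    "load":      lambda: results.update(loaded=len(results["clean"])),
}
deps = {"transform": ["extract"], "load": ["transform"]}

order = run_pipeline(tasks, deps)
print(order)  # tasks run in dependency order
```

A real orchestrator adds scheduling, retries, and monitoring on top of this; the sketch only shows the dependency resolution itself.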

You ideally will have 

  • Experience with Data Pipeline implementation (DBT, Airflow, Airbyte)
  • Experience with SQL optimization
  • Experience with Python or other scripting languages
  • Experience with Grafana or other visualization tools
  • Experience in, and passion for, blockchain technology
  • A strong alignment to our principles: https://status.app/manifesto

 

Bonus points if 

  • Comfortable working remotely and asynchronously
  • Experience working for an open source organization.  
  • Experience with Linux, Docker
  • Experience with LLM fine-tuning

[Don’t worry if you don’t meet all of these criteria, we’d still love to hear from you anyway if you think you’d be a great fit for this role. Just explain to us why in your cover letter]

 

Hiring Process 

  1. Interview with People Ops team
  2. Technical Task
  3. Interview with BI Team Lead
  4. Interview with Infra Team Lead

Compensation

We are happy to pay in any mix of fiat/crypto.

See more jobs at Status

Apply for this job

4d

Senior Data Engineer - Pacific or Central Time Only

Experian - Costa Mesa, CA, Remote
S3 · 2 years of experience · agile · 5 years of experience · 3 years of experience · tableau · airflow · sql · api · python · AWS

Experian is hiring a Remote Senior Data Engineer - Pacific or Central Time Only

Job Description

The Senior Data Engineer reports to the Data Engineer Manager and designs, develops and supports ETL data pipeline solutions in the AWS environment.

  • You will help build a semantic layer by developing ETL and virtualized views.
  • Collaborate with engineering teams to discover and use new data that is being introduced into the environment.
  • Work as part of a team to build and support a data warehouse and implement solutions using Python to process structured and unstructured data.
  • Support existing ETL processes written in SQL, troubleshoot and resolve production issues.
  • You will create report specifications and process documentation for the required data deliverables.
  • Be a liaison between business and technical teams to achieve project goals, delivering cross-functional reporting solutions.
  • Troubleshoot and resolve data, system, and performance issues.
  • Communicate with partners, other technical teams, and management to collect requirements, articulate data deliverables, and provide technical designs.
  • Provide engineering support to the customer support team to resolve critical customer issues in an Agile environment.
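
A minimal sketch of the kind of SQL-driven ETL step described above, using Python's built-in sqlite3 in place of a warehouse like Redshift (table and column names are hypothetical):

```python
import sqlite3

# Minimal extract-transform-load step against an in-memory SQLite database.
# Table and column names are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [(1, 10.0), (1, 5.0), (2, 7.5), (2, None)],  # None simulates a bad record
)

# Transform in SQL: drop bad rows, aggregate per user into a reporting table.
conn.execute("""
    CREATE TABLE user_totals AS
    SELECT user_id, SUM(amount) AS total
    FROM raw_events
    WHERE amount IS NOT NULL
    GROUP BY user_id
""")

totals = dict(conn.execute("SELECT user_id, total FROM user_totals ORDER BY user_id"))
print(totals)  # {1: 15.0, 2: 7.5}
conn.close()
```

The same pattern (load raw rows, clean and aggregate in SQL, land a reporting table) carries over to production warehouses; only the connection and dialect change.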

Qualifications

  • Experience communicating updates and resolutions to customers and other partners since, as a Data Engineer, you will collaborate with partners and technical teams.
  • Minimum 5 years of experience as an ETL Data Engineer, with intermediate knowledge of SQL and data
  • Experience approaching a problem from different angles and analyzing the pros and cons of different solutions
  • Minimum 5 years of experience in Python scripting
  • Minimum 2 years of experience with AWS data ecosystem (Redshift, EMR, S3, MWAA, etc.)
  • Minimum 3 years of experience working in an Agile environment.
  • Experience with Tableau is a plus.
  • Experience with DBT.
  • Hands-on experience with Apache Airflow or equivalent tools (AWS MWAA) for the orchestration of data pipelines.
  • Hands-on experience working and building with Python API-based data pipelines.

See more jobs at Experian

Apply for this job

4d

Sr. SQL Data Analyst

Experian - Heredia, Costa Rica, Remote
Bachelor's degree · tableau · airflow · sql · Design · python · AWS

Experian is hiring a Remote Sr. SQL Data Analyst

Job Description

Role Summary

Reporting directly to the Business Analyst Director for ECS, you will:

Primary Responsibilities Include:

  • Develop SQL queries to analyze our data to support ECS products and services
  • Perform peer review and SQL query optimization to enhance execution performance and data quality
  • Design and implement data architecture models supporting Business Operations data schemas
  • Create data pipelines to automate key business functions and ensure the delivery of critical data for operational oversight
  • You will design ETL processes to gather and prepare data from different sources for reporting
  • Perform data analysis to support business initiatives and to provide analytical insights that drive strategic decision-making
  • Detect trends and patterns in operational data to help resolve operational issues
  • Identify key performance indicators (KPIs) and track business metrics to monitor operational performance
  • Communicate data analysis findings to all kinds of stakeholders
  • Develop, and present Tableau dashboards to highlight business insights to cross-functional audiences including senior leadership
  • Create technical documentation and diagrams including data and process flow diagrams, ER diagrams.
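
The SQL optimization work listed above often comes down to reading a query plan and indexing the filtered column. A small illustration with Python's built-in sqlite3 (the schema is made up, and a warehouse like Redshift has its own EXPLAIN output, but the scan-vs-search idea is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (status, total) VALUES (?, ?)",
    [("open", 10.0), ("closed", 20.0), ("open", 5.0)],
)

query = "SELECT SUM(total) FROM orders WHERE status = ?"

# Before indexing: the planner has to scan the whole table.
plan = conn.execute("EXPLAIN QUERY PLAN " + query, ("open",)).fetchall()
before = plan[0][3]   # plan detail string, e.g. "SCAN orders"

# Add an index on the filter column, then re-check the plan.
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query, ("open",)).fetchall()
after = plan[0][3]    # now a SEARCH using idx_orders_status

print(before)
print(after)
```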

Qualifications

Required Qualifications:

  • Bachelor's degree in quantitative field or related coursework and experience
  • 5+ years of hands-on SQL development
  • 3+ years of Data visualization experience
  • Proficiency profiling data, including data discovery, cleansing, and analysis

Preferred Qualifications:

  • Experience using Apache Airflow
  • Experience with GitHub
  • Working knowledge of Alteryx
  • Experience writing SQL in AWS RedShift
  • Working knowledge of Tableau or other data visualization tools
  • Scripting experience in Python, R, or similar language

See more jobs at Experian

Apply for this job

4d

Back End Developer with C# (Remote)

Loginsoft Consulting LLC - Silver Spring, MD - Remote
Rust · jira · airflow · sql · Design · git · c++ · jenkins · python

Loginsoft Consulting LLC is hiring a Remote Back End Developer with C# (Remote)

NOTE: THIS POSITION IS TO JOIN AS W2 ONLY.

Back End Developer with C#

Location: Silver Spring, MD (Remote)

Duration: 12 Months

Project: We are actively seeking a motivated Back-end Developer to enhance our dynamic team. The ideal candidate is a team player who works well with others but is also able to work independently. You excel at creating and improving scalable services in line with innovations. As a Back-End Developer, your role could include building services, REST APIs, and state-aware workflows, largely in cloud environments. We love collaborating with teammates on new app designs, partnering with front-end developers and video engineers to combine work into a cohesive product, and improving existing services.

Job Responsibilities:

  • Develop and maintain applications using cloud architecture working in Jira and Git
  • Ability to build from user stories and be active in technical design
  • Build efficient, scalable, secure, and observable applications
  • Able to take and give feedback in team meetings and code review
  • Build and maintain highly scalable services and workflows
  • Be proactive and responsive in addressing technical issues

Qualifications:

  • Minimum of 5 years' experience working as a Software Developer
  • Capable of self-deployments in a well-defined container pipeline
  • Firm understanding of cloud infrastructure
  • Solid experience with one or more of the following: Node, C#, Rust, Python
  • Understanding of SQL and document database development
  • Experience with workflow engines (SDVI Rally, Airflow, Step Functions, Argo Workflows, Jenkins, etc.)
  • Understanding of fundamental design principles behind a scalable application

Must Have Skills / Requirements:

  • Aptitude/communication skills: able to work hard independently and collaboratively with others
  • CI/CD experience: continuous integration and continuous deployment
  • Minimum of 5 years' experience working as a Software Developer

Required background/ Skillsets:

  • Software Developing background.
  • Background in media

Soft Skills

  • Communication

See more jobs at Loginsoft Consulting LLC

Apply for this job

7d

Front End Developer (Remote)

Loginsoft Consulting LLC - Silver Spring, MD - Remote
Rust · jira · airflow · Design · ui · html5 · git · java · c++ · jenkins · python · Node.js

Loginsoft Consulting LLC is hiring a Remote Front End Developer (Remote)

NOTE: THIS POSITION IS TO JOIN AS W2 ONLY.

Front End Developer

Location: Silver Spring, MD (Remote)

Duration: 12 Months

Rate: $30/hr

Project: We are actively seeking a motivated Front-end Developer to enhance our dynamic team. The ideal candidate is a team player who works well with others but is also able to work independently. You excel at creating and improving scalable services in line with innovations. Your role could include building services, REST APIs, and state-aware workflows, largely in cloud environments. We love collaborating with teammates on new app designs, partnering with team members to combine work into a cohesive product, and improving existing services.

Job Responsibilities:

  • Ability to build from user stories and be active in technical design
  • Able to take and give feedback in team meetings and code review
  • Be proactive and responsive in addressing technical issues

Technology requirements:

  • Solid experience with JavaScript/Typescript and Node.js in conjunction with at least one of the following: React.js, Vue.js, or similar frameworks.
  • Strong understanding of front-end technologies, including HTML5, CSS3, JS, responsive UIs.
  • Main language is C++
  • Added bonus: Rust/C# and JavaScript experience
  • Firm understanding of cloud infrastructure
  • Solid experience with one or more of the following: C++, C#, Rust, Python
  • Experience with workflow engines (SDVI Rally, Airflow, Step Functions, Argo Workflows, Jenkins, etc.)
  • Understanding of fundamental design principles behind a scalable application

Required background/ Skillsets:

  • 1-3 years' experience working as a Software Developer
  • Understanding of fundamental design principles behind a scalable application.
  • Knowledge of best practices and standards for web development.
  • Good understanding of asynchronous request handling, and partial page updates.
  • Familiarity with RESTful APIs
  • UX/UI experience
  • GitHub experience
  • UI experience is required; full-stack experience alone is not enough

Nice-to-Haves:

  • Media knowledge background
  • Designing UIs, not just building them

Soft Skills:

  • Communication

See more jobs at Loginsoft Consulting LLC

Apply for this job

8d

Data Driven | Data Engineer

Devoteam - Lisboa, Portugal, Remote
Master’s Degree · 3 years of experience · airflow · sql · azure · python · AWS

Devoteam is hiring a Remote Data Driven | Data Engineer

Job Description

We are currently looking for a Data Engineer to work with us.

Qualifications

  • Bachelor’s or Master’s degree in IT or equivalent;
  • At least 3 years of experience as a Data Engineer;
  • High level of experience with the following programming languages: Python and SQL;
  • Working experience with AWS or Azure;
  • Proficient Level of English (spoken and written);
  • Good communication skills;
  • Knowledge in Airflow will be a plus.

 

See more jobs at Devoteam

Apply for this job

8d

Data Driven | Python Developer

Devoteam - Lisboa, Portugal, Remote
Master’s Degree · 3 years of experience · airflow · sql · azure · python · AWS

Devoteam is hiring a Remote Data Driven | Python Developer

Job Description

We are currently looking for a Data Engineer to work with us.

Qualifications

  • Bachelor’s or Master’s degree in IT or equivalent;
  • At least 3 years of experience as a Data Engineer;
  • High level of experience with the following programming languages: Python and SQL;
  • Working experience with AWS or Azure;
  • Proficient Level of English (spoken and written);
  • Good communication skills;
  • Knowledge in Airflow will be a plus.

 

See more jobs at Devoteam

Apply for this job

9d

Lead, Data Engineer (Client Deployment) (United States)

DemystData - United States, Remote
remote-first · airflow · Design

DemystData is hiring a Remote Lead, Data Engineer (Client Deployment) (United States)

OUR SOLUTION

At Demyst, we're transforming the way enterprises manage data, eliminating key challenges and driving significant improvements in business outcomes through data workflow automation. Due to growing demand, we're expanding our team and seeking talented individuals to help us scale.

Our platform simplifies workflows, eliminating the need for complicated platforms and expensive consultants. With top-tier security and global reach, we're helping businesses in banking and insurance achieve digital transformation. If you're passionate about data and affecting change, Demyst is the place for you.

THE CHALLENGE

Demyst is seeking a Lead Engineer with a strong data engineering focus to play a pivotal role in delivering our next-generation data platform to leading enterprises across North America. In this role, you will lead a team of data engineers with a primary focus on data integration and solution deployment. You will oversee the development and management of data pipelines, ensuring they are robust, scalable, and reliable. This is an ideal opportunity for a hands-on data engineering leader to apply technical, leadership, and problem-solving skills to deliver high-quality solutions for our clients.

Your role will involve not only technical leadership and mentoring but also actively contributing to coding, architectural decisions, and data engineering strategy. You will guide your team through complex client deployments, from planning to execution, ensuring that data solutions are effectively integrated and aligned with client goals.

Demyst is a remote-first company. The candidate must be based in the United States.

RESPONSIBILITIES

  • Lead the configuration, deployment, and maintenance of data solutions on the Demyst platform to support client use cases.
  • Supervise and mentor the local and distributed data engineering team, ensuring best practices in data architecture, pipeline development, and deployment.
  • Recruit, train, and evaluate technical talent, fostering a high-performing, collaborative team culture.
  • Contribute hands-on to coding, code reviews, and technical decision-making, ensuring scalability and performance.
  • Design, build, and optimize data pipelines, leveraging tools like Apache Airflow, to automate workflows and manage large datasets effectively.
  • Work closely with clients to advise on data engineering best practices, including data cleansing, transformation, and storage strategies.
  • Implement solutions for data ingestion from various sources, ensuring the consistency, accuracy, and availability of data.
  • Lead critical client projects, managing engineering resources, project timelines, and client engagement.
  • Provide technical guidance and support for complex enterprise data integrations with third-party systems (e.g., AI platforms, data providers, decision engines).
  • Ensure compliance with data governance and security protocols when handling sensitive client data.
  • Develop and maintain documentation for solutions and business processes related to data engineering workflows.
  • Other duties as required.
REQUIREMENTS

  • Bachelor's degree or higher in Computer Science, Data Engineering, or related fields. Equivalent work experience is also highly valued.
  • 5-10 years of experience in data engineering, software engineering, or client deployment roles, with at least 3 years in a leadership capacity.
  • Strong leadership skills, including the ability to mentor and motivate a team, lead through change, and drive outcomes.
  • Expertise in designing, building, and optimizing ETL/ELT data pipelines using Python, JavaScript, Golang, Scala, or similar languages.
  • Experience in managing large-scale data processing environments, including Databricks and Spark.
  • Proven experience with Apache Airflow to orchestrate data pipelines and manage workflow automation.
  • Deep knowledge of cloud services, particularly AWS (EC2/ECS, Lambda, S3), and their role in data engineering.
  • Hands-on experience with both SQL and NoSQL databases, with a deep understanding of data modeling and architecture.
  • Strong ability to collaborate with clients and cross-functional teams, delivering technical solutions that meet business needs.
  • Proven experience in unit testing, integration testing, and engineering best practices to ensure high-quality code.
  • Familiarity with agile project management tools (JIRA, Confluence, etc.) and methodologies.
  • Experience with data visualization and analytics tools such as Jupyter Lab, Metabase, Tableau.
  • Strong communicator and problem solver, comfortable working in distributed teams.
BENEFITS

  • Operate at the forefront of data management innovation, and work with the largest industry players in an emerging field that is fueling growth and technological advancement globally
  • Have an outsized impact in a rapidly growing team, offering real autonomy and responsibility for client outcomes
  • Stretch yourself to help define and support something entirely new
  • Distributed team and culture, with fully flexible working hours and location
  • Collaborative, inclusive, and dynamic culture
  • Generous benefits and compensation plans
  • ESOP awards available for tenured staff
  • Join an established, and scaling data technology business

Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.

See more jobs at DemystData

Apply for this job

10d

Lead Data Analyst, Product

tableau · airflow · sql · Design · c++ · python

hims & hers is hiring a Remote Lead Data Analyst, Product

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

​​About the Role:

As a Manager of Product Analytics, you and your team will shape the customer experience through high-quality experimental design and hypothesis testing. You will work cross-functionally with product managers, growth leads, designers, and engineers in a fast-paced collaborative environment. Your knowledge of A/B testing and digital analytics combined with your background in experimental design will allow Hims and Hers to build best-in-class customer experiences. This position will report to the Senior Manager of Product Analytics.

You Will:

  • Design experiments and provide actionable and scalable recommendations from the results
  • Deliver in-depth analyses that are statistically sound and easily understood by non-technical audiences
  • Work with your team to curate the experimentation roadmap for the product and growth teams
  • Enable data self-service by designing templates that are easy to understand using relevant KPIs
  • Collaborate across analytics, engineering, and growth teams to improve the customer experience
  • Distill your knowledge of tests into playbooks that can be implemented and utilized to help us transform our digital experience
  • Identify causal relationships in our data using advanced statistical modeling
  • Segment users based on demographic, behavioral, and psychographic attributes to tailor product experiences and lifecycle communications
  • Align analytics initiatives with broad business objectives to build long-term value
  • Conduct deep-dive analyses to answer specific business questions and provide actionable recommendations to product and growth team
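
Many of the A/B analyses described above reduce to a two-proportion z-test on conversion rates. A stdlib-only sketch with invented counts (not any real experiment's numbers):

```python
from math import sqrt, erf

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 200/2000, variant 260/2000.
z, p = two_proportion_ztest(200, 2000, 260, 2000)
print(round(z, 2), round(p, 4))  # significant at the usual 0.05 level here
```

In practice a statistics library would supply this test, along with power calculations for sizing the experiment up front; the sketch just shows the inference step.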

You Have:

  • 8+ years of analytics experience
  • 5+ years of experience in A/B testing
  • Experience working with subscription metrics
  • A strong work ethic and the drive to learn more and understand a problem in detail
  • Strong organizational skills with an aptitude to manage long-term projects from end to end
  • Expert SQL skills
  • Extensive experience working with data engineering teams and production data pipelines
  • Experience programming in Python, SAS, or R 
  • Experience in data modeling and statistics with a strong knowledge of experimental design and statistical inference 
  • Development and training of predictive models
  • Advanced knowledge of data visualization and BI in Looker or Tableau
  • Ability to explain technical analyses to non-technical audience

A Big Plus If You Have:

  • Advanced degree in Statistics, Mathematics, or a related field
  • Experience with price testing and modeling price elasticity
  • Experience with telehealth concepts
  • Project management experience 
  • DBT, airflow, and Databricks experience

Our Benefits (there are more but here are some highlights):

  • Competitive salary & equity compensation for full-time roles
  • Unlimited PTO, company holidays, and quarterly mental health days
  • Comprehensive health benefits including medical, dental & vision, and parental leave
  • Employee Stock Purchase Program (ESPP)
  • Employee discounts on hims & hers & Apostrophe online products
  • 401k benefits with employer matching contribution
  • Offsite team retreats

#LI-Remote

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range for US-based employees is
$160,000 - $190,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims considers all qualified applicants for employment, including applicants with arrest or conviction records, in accordance with the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance, the California Fair Chance Act, and any similar state or local fair chance laws.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at accommodations@forhims.com and describe the needed accommodation. Your privacy is important to us, and any information you share will only be used for the legitimate purpose of considering your request for accommodation. Hims & Hers gives consideration to all qualified applicants without regard to any protected status, including disability. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job

10d

Senior Analytics Engineer, MarTech

CLEAR - Corporate - New York, New York, United States (Hybrid)
tableau · airflow · sql · Design · jenkins · python · AWS

CLEAR - Corporate is hiring a Remote Senior Analytics Engineer, MarTech

Today, CLEAR is well-known as a leader in digital and biometric identification, reducing friction for our members wherever an ID check is needed. We’re looking for an experienced Senior Analytics Engineer to help us build the next generation of products, which will go beyond just ID and enable our members to leverage the power of a networked digital identity. As a Senior Analytics Engineer at CLEAR, you will participate in the design and implementation of our MarTech products, leveraging your expertise to drive technical innovation and ensure seamless integration of marketing technologies.


A brief highlight of our tech stack:

  • SQL / Python / Looker / Snowflake / Airflow / Databricks / Spark / dbt

What you'll do:

  • Build a scalable data system in which Analysts and Engineers can self-service changes in an automated, tested, secure, and high-quality manner 
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • Develop and maintain data pipelines to collect, clean, and transform data. Owning end to end data product from ingestion to visualization
  • Develop and implement data analytics models
  • Partner with product and other stakeholders to uncover requirements, to innovate, and to solve complex problems
  • Have a strong sense of ownership, responsible for architectural decision-making and striving for continuous improvement in technology and processes at CLEAR

 What you're great at:

  • 6+ years of data engineering experience
  • Working with cloud-based application development, with fluency in at least a few of: 
    • Cloud services providers like AWS
    • Data pipeline orchestration tools like Airflow, Dagster, Luigi, etc
    • Big data tools like Spark, Kafka, Snowflake, Databricks, etc
    • Collaboration, integration, and deployment tools like Github, Argo, and Jenkins 
    • Data visualization tool like Looker, Tableau, etc
  • Articulating technical concepts to a mixed audience of technical and non-technical stakeholders
  • Collaborating and mentoring less experienced members of the team
  • Comfort with ambiguity 
  • Curiosity about technology, belief in constant learning, and the autonomy to figure out what's important

How You'll be Rewarded:

At CLEAR we help YOU move forward - because when you’re at your best, we’re at our best. You’ll work with talented team members who are motivated by our mission of making experiences safer and easier. Our hybrid work environment provides flexibility. In our offices, you’ll enjoy benefits like meals and snacks. We invest in your well-being and learning & development with our stipend and reimbursement programs. 

We offer holistic total rewards, including comprehensive healthcare plans, family building benefits (fertility and adoption/surrogacy support), flexible time off, free OneMedical memberships for you and your dependents, and a 401(k) retirement plan with employer match. The base salary range for this role is $175,000 - $215,000, depending on levels of skills and experience.

The base salary range represents the low and high end of CLEAR’s salary range for this position. Salaries will vary depending on various factors which include, but are not limited to location, education, skills, experience and performance. The range listed is just one component of CLEAR’s total compensation package for employees and other rewards may include annual bonuses, commission, Restricted Stock Units.

About CLEAR

Have you ever had that green-light feeling? When you hit every green light and the day just feels like magic. CLEAR's mission is to create frictionless experiences where every day has that feeling. With more than 25 million passionate members and hundreds of partners around the world, CLEAR’s identity platform is transforming the way people live, work, and travel. Whether it’s at the airport, stadium, or right on your phone, CLEAR connects you to the things that make you, you - unlocking easier, more secure, and more seamless experiences - making them all feel like magic.

CLEAR provides reasonable accommodation to qualified individuals with disabilities or protected needs. Please let us know if you require a reasonable accommodation to apply for a job or perform your job. Examples of reasonable accommodation include, but are not limited to, time off, extra breaks, making a change to the application process or work procedures, policy exceptions, providing documents in an alternative format, live captioning or using a sign language interpreter, or using specialized equipment.

See more jobs at CLEAR - Corporate

Apply for this job

11d

Engineering Manager, Data Platform

Grammarly, San Francisco; Hybrid
ML, remote-first, airflow, Design, azure, c++, AWS

Grammarly is hiring a Remote Engineering Manager, Data Platform

Grammarly is excited to offer a remote-first hybrid working model. Grammarly team members in this role must be based in San Francisco. They must meet in person for collaboration weeks, traveling if necessary to the hub(s) where their team is based.

This flexible approach gives team members the best of both worlds: plenty of focus time along with in-person collaboration that fosters trust and unlocks creativity.

About Grammarly

Grammarly is the world’s leading AI writing assistance company trusted by over 30 million people and 70,000 teams. From instantly creating a first draft to perfecting every message, Grammarly helps people at 96% of the Fortune 500 and teams at companies like Atlassian, Databricks, and Zoom get their point across—and get results—with best-in-class security practices that keep data private and protected. Founded in 2009, Grammarly is No. 14 on the Forbes Cloud 100, one of TIME’s 100 Most Influential Companies, one of Fast Company’s Most Innovative Companies in AI, and one of Inc.’s Best Workplaces.

The Opportunity

To achieve our ambitious goals, we’re looking for an Engineering Manager to join our Data Platform team and help us build a world-class data platform. Grammarly’s success depends on its ability to efficiently ingest over 60 billion daily events while using our systems to improve our product. This role is a unique opportunity to experience all aspects of building complex software systems: contributing to the strategy, defining the architecture, and building and shipping to production.

Grammarly’s engineers and researchers have the freedom to innovate and uncover breakthroughs—and, in turn, influence our product roadmap. The complexity of our technical challenges is growing rapidly as we scale our interfaces, algorithms, and infrastructure. You can hear more from our team on our technical blog.

We are seeking a highly skilled and experienced Manager for our Data Platform team to achieve our ambitious objectives. This role is crucial in managing and evolving our data infrastructure, engineering, and governance processes to support modern machine learning (ML) use cases, self-serve analytics, and data policy management across the organization. The ideal candidate will possess strong technical expertise, exceptional leadership abilities, and the capability to mentor and develop a high-performing team that operates across data infrastructure, engineering, and governance.

This person will be integral to the larger data organization, reporting directly to the Director of Data Platform. They will have the opportunity to influence decisions and the direction of our overall data platform, including data processing, infrastructure, data governance, and analytics engineering.

As the Data Platform team manager, you will lead and mentor a team of data engineers, infrastructure engineers, and data governance specialists, fostering a collaborative and innovative environment focused on professional growth. You will oversee the design, implementation, and maintenance of secure, scalable, and optimized data platforms, ensuring high performance and reliability. Your role includes developing and executing strategic roadmaps aligned with business objectives and collaborating closely with cross-functional teams and the larger data organization to ensure seamless data integration, governance, and access. Additionally, you will provide technical leadership and play a pivotal role in resource management and recruiting efforts, driving the team’s success and aligning with the organization’s long-term data strategy.

In this role, you will:

  • Build a highly specialized data platform team to support the growing needs and complexity of our product, business, and ML organizations.
  • Oversee the design, implementation, and maintenance of a robust data infrastructure, ensuring high availability and reliability across ingestion, processing, and storage layers.
  • Lead the development of frameworks and tooling that enable self-serve analytics, policy management, and seamless data governance across the organization.
  • Ensure data is collected, transformed, and stored efficiently to support real-time, batch processing, and machine learning needs.
  • Act as a liaison between the Data Platform team and the broader organization, ensuring seamless communication, collaboration, and alignment with global data strategies.
  • Drive cross-functional meetings and initiatives to represent the Data Platform team’s interests and contribute to the organization’s overall data strategy, ensuring ML and analytics use cases are adequately supported.
  • Drive the evaluation, selection, and implementation of new technologies and tools that enhance the team’s capabilities and improve the organization’s overall data infrastructure and governance processes.
  • Implement and enforce data governance policies and practices to ensure data quality, privacy, security, and compliance with organizational standards.
  • Collaborate with stakeholders to define and refine data governance policies that align with business objectives and facilitate discoverability and accessibility of high-quality data.
  • Monitor and assess the data platform's performance to identify areas for optimization, cost management, and continuous improvement.
  • Foster a collaborative and high-performance culture within the team, emphasizing ownership and innovation.
  • Cultivate an ownership mindset and culture across product and platform teams by providing necessary metrics to drive informed decisions and continuous improvement.
  • Set high performance and quality standards, coaching team members to meet them, and mentoring and growing junior and senior IC talent.

Qualifications:

  • 7+ years of experience in data engineering, infrastructure & governance, with at least 2-3 years in a leadership or management role.
  • Proven experience in building and managing large-scale data platforms, including data ingestion pipelines and infrastructure.
  • Experience with cloud platforms and data ecosystems such as AWS, GCP, Azure, and Databricks.
  • Familiarity with modern data engineering and orchestration tools and frameworks (e.g., Apache Kafka, Airflow, DBT, Spark).
  • Strong understanding of data governance frameworks, policy management, and self-serve analytics platforms.
  • Excellent leadership and people management skills, with a track record of mentoring and developing high-performing teams.
  • Experience working with geographically distributed teams and aligning with global data and governance strategies.
  • Strong problem-solving skills, with the ability to navigate and resolve complex technical challenges.
  • Excellent communication and collaboration skills, with the ability to work effectively with stakeholders across different locations and time zones.
  • Proven ability to operate in a fast-paced, dynamic environment where things change quickly.
  • Leads by setting well-understood goals and sharing the appropriate level of context for maximum autonomy, but is also profoundly technical and can dive in to help when necessary.
  • Embodies our EAGER values—ethical, adaptable, gritty, empathetic, and remarkable.
  • Is inspired by our MOVE principles: move fast and learn faster; obsess about creating customer value; value impact over activity; and embrace healthy disagreement rooted in trust.
  • Willingness to meet in person for scheduled team collaboration weeks and travel, if necessary, to the hub where the team is based.

Compensation and Benefits

Grammarly offers all team members competitive pay along with a benefits package encompassing the following and more: 

  • Excellent health care (including a wide range of medical, dental, vision, mental health, and fertility benefits)
  • Disability and life insurance options
  • 401(k) and RRSP matching 
  • Paid parental leave
  • 20 days of paid time off per year, 12 days of paid holidays per year, two floating holidays per year, and unlimited sick days 
  • Generous stipends (including those for caregiving, pet care, wellness, your home office, and more)
  • Annual professional development budget and opportunities

Grammarly takes a market-based approach to compensation, which means base pay may vary depending on your location. Our US locations are categorized into two compensation zones based on proximity to our hub locations.

Base pay may vary considerably depending on job-related knowledge, skills, and experience. The expected salary ranges for this position are outlined below by compensation zone and may be modified in the future. 

San Francisco: 
Zone 1: $285,000 – $325,000/year (USD)

For more information about our compensation zones and locations where we currently support employment, please refer to this page. If a location of interest is not listed, please speak with a recruiter for additional information. 

We encourage you to apply

At Grammarly, we value our differences, and we encourage all to apply. Grammarly is an equal opportunity company. We do not discriminate on the basis of race or ethnic origin, religion or belief, gender, disability, sexual identity, or age.

For more details about the personal data Grammarly collects during the recruitment process, for what purposes, and how you can address your rights, please see the Grammarly Data Privacy Notice for Candidates here

#LI-Hybrid

 

Apply for this job

11d

Sr. Database Engineer - SQL

Experian, Heredia, Costa Rica, Remote
Bachelor's degree, tableau, airflow, sql, Design, python, AWS

Experian is hiring a Remote Sr. Database Engineer - SQL

Job Description

Role Summary

Reporting directly to the Business Analyst Director for ECS, you will:

Primary Responsibilities Include:

  • Develop SQL queries to analyze our data to support ECS products and services
  • Perform peer review and SQL query optimization to enhance execution performance and data quality
  • Design and implement data architecture models supporting Business Operations data schemas
  • Create data pipelines to automate key business functions and ensure the delivery of critical data for operational oversight
  • Design ETL processes to gather and prepare data from different sources for reporting
  • Perform data analysis to support business initiatives and to provide analytical insights that drive strategic decision-making
  • Detect trends and patterns in operational data to help resolve operational issues
  • Identify key performance indicators (KPIs) and track business metrics to monitor operational performance
  • Communicate data analysis findings to all kinds of stakeholders
  • Develop and present Tableau dashboards to highlight business insights to cross-functional audiences including senior leadership
  • Create technical documentation and diagrams, including data flow, process flow, and ER diagrams.
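One way to picture the query-review and optimization bullet: SQLite, via Python's bundled sqlite3 module, can compare query plans before and after adding an index. The table and column names below are invented for illustration; at the scale described in the listing this work would target a warehouse such as AWS Redshift rather than SQLite:

```python
import sqlite3

# Hypothetical operations table; all names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)",
                 [("EU", 10.0), ("US", 25.0), ("EU", 5.0)])

query = "SELECT region, SUM(amount) FROM orders WHERE region = ? GROUP BY region"

# Without an index the planner has to scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("EU",)).fetchall()

# An index on the filtered column lets the planner search instead of scan.
conn.execute("CREATE INDEX idx_orders_region ON orders (region)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("EU",)).fetchall()

print(conn.execute(query, ("EU",)).fetchall())  # [('EU', 15.0)]
```

Comparing `plan_before` and `plan_after` during peer review is a cheap way to verify that an optimization actually changed the execution strategy, not just the wording of the query.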

Qualifications

Required Qualifications:

  • Bachelor's degree in a quantitative field, or related coursework and experience
  • 5+ years of hands-on SQL development
  • 3+ years of Data visualization experience
  • Proficiency in profiling data, including data discovery, cleansing, and analysis

Preferred Qualifications:

  • Experience using Apache Airflow
  • Experience with GitHub
  • Working knowledge of Alteryx
  • Experience writing SQL in AWS RedShift
  • Working knowledge of Tableau or other data visualization tools
  • Scripting experience in Python, R, or similar language

See more jobs at Experian

Apply for this job

14d

Senior Architect, AI/ML, Field CTO Office

snowflakecomputing, Remote (New York City, NY, USA)
ML, Sales, airflow, sql, Design, python

snowflakecomputing is hiring a Remote Senior Architect, AI/ML, Field CTO Office

Build the future of data. Join the Snowflake team.

We’re at the forefront of the data revolution, committed to building the world’s greatest data and applications platform. Our ‘get it done’ culture allows everyone at Snowflake to have an equal opportunity to innovate on new ideas, create work with a lasting impact, and excel in a culture of collaboration.

Our Industries Sales Engineering organization is seeking a Senior Architect, AI/ML for the Industries Field CTO Office to join our team: a leader who can work with both technical and business executives on the design and architecture of the Snowflake Cloud Data Platform as a critical component of their enterprise data architecture and overall machine learning ecosystem.

In this role you will work with sales teams, product management, and technology partners to leverage best practices and reference architectures highlighting Snowflake’s Cloud Data Platform as a core technology enabling platform for the emerging Data Science workload throughout an organization.

As a Senior Architect focused on AI/ML, you must share our passion and vision in helping our customers and partners drive faster time to insight through Snowflake’s Cloud Data Platform, thrive in a dynamic environment, and have the flexibility and willingness to jump in and get things done. You are equally comfortable in both a business and technical context, interacting with executives and talking shop with technical audiences.

IN THIS ROLE YOU WILL GET TO:

  • Apply your data science architecture expertise while presenting Snowflake technology and vision to executives and technical contributors at strategic prospects, customers, and partners
  • Work with our sales teams, product management, and partners to drive innovation in our Cloud Data Platform and adoption of Snowflake's core ML and GenAI technologies
  • Partner with sales team and channel partners to understand the needs of our customers,  strategize on how to navigate and accelerate winning sales cycles, provide compelling value-based enterprise architecture deliverables and working sessions to ensure customers are set up for success operationally, and support strategic enterprise pilots / proof-of-concepts 
  • Collaborate closely with our Product team to effectively influence the Snowflake’s product roadmaps based on field team and customer feedback
  • Partner with Product Marketing teams to spread awareness and support pipeline building via customer roundtables,  conferences, events, blogs, webinars, and whitepapers

ON DAY ONE, WE WILL EXPECT YOU TO HAVE:

  • 5+ years of experience building and deploying machine learning solutions in the cloud 
  • Deep technical hands on expertise within Data Science tools and ecosystem 
  • 2+ years of working with cloud native ML tools 
  • Outstanding presentation skills to both technical and executive audiences, whether impromptu on a whiteboard or using presentations and demos
  • Working knowledge with data engineering technologies and tools (dbt, Airflow, etc)
  • Expert level knowledge of Data Science and ML fundamentals
  • Working knowledge of deep learning concepts, techniques, and tools (Pytorch, Tensorflow, etc). 
  • 2+ years experience building and deploying ML and data engineering applications and solutions on Spark
  • Expert level knowledge of Python and popular third-party packages (Pandas, NumPy, TensorFlow, sklearn, PyTorch, etc)
  • Working knowledge of SQL
  • Introductory familiarity with LLM developer tools like LangChain or LlamaIndex 
  • Industry Focus a plus (Financial Services, Healthcare & Lifesciences, Media, Retail / CPG, Manufacturing, Insurance, Technology and Telecom, or Federal)
  • Bachelor’s Degree required; Master's Degree in computer science, engineering, mathematics or related fields, or equivalent experience preferred.

Snowflake is growing fast, and we’re scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. 

How do you want to make your impact?

Every Snowflake employee is expected to follow the company’s confidentiality and security standards for handling sensitive data. Snowflake employees must abide by the company’s data security plan as an essential part of their duties. It is every employee's duty to keep customer information secure and confidential.

See more jobs at snowflakecomputing

Apply for this job

14d

Senior Principal Architect - Cloud Engineering

IFS, Bengaluru, India, Remote
gRPC, golang, agile, airflow, oracle, Design, mobile, azure, graphql, java, c++, .net, docker, postgresql, kubernetes, angular, jenkins, python, javascript

IFS is hiring a Remote Senior Principal Architect - Cloud Engineering

Job Description

The Senior Principal Architect (“SPA”) will own the overall architecture accountability for one or more portfolios within IFS Technology. The role of the SPA is to build and develop the technology strategy, while growing, leading, and energising multi-faceted technical teams to design and deliver technical solutions that meet IFS technology needs and are supported by excellent data, methodology, systems and processes. The role will work with a broad set of stakeholders including product managers, engineers, and various R&D and business leaders. The occupant of this role diagnoses and solves significant, complex and non-routine problems; translates practices from other markets, countries and industries; provides authoritative, technical recommendations which have a significant impact on business performance in the short and medium term; and contributes to company standards and procedures, including the IFS Technical Reference Architecture. This role actively identifies new approaches that enhance and, where possible, simplify complexities in the IFS suite. The SPA represents IFS as the authority in one or more technology areas or portfolios and acts as a role model to develop experts within this area.

What is the role?

  • Build, nurture and grow high performance engineering teams using Agile Engineering principles.
  • Provide technical leadership for design and development of software meeting functional & nonfunctional requirements.
  • Provide multi-horizon technology thinking to broad portfolios and platforms in line with desired business needs.
  • Adopt a hands-on approach to develop the architecture runway for teams.
  • Set technical agenda closely with the Product and Program Managers
  • Ensure maintainability, security and performance in software components developed using well-established engineering/architectural principles.
  • Ensure software quality complying with shift left quality principles.  
  • Conduct peer reviews & provide feedback ensuring quality standards.
  • Engage with requirement owners and liaise with other stakeholders.
  • Contribute to improvements in IFS products & services.

Qualifications

What do we need from you? 

It’s your excellent influencing and communication skills that will really make the difference. Entrepreneurship and resilience will be required, to help drive and shape the technology strategy. You will need technical, operational, and commercial breadth to deliver a strategic technical vision alongside a robust, secure and cost-effective delivery platform and operational model.

  • Seasoned Leader with 15+ years of hands-on experience in Design, Development and Implementation of scalable cloud-based web and mobile applications.
  • Have strong software architectural, technical design and programming skills.
  • Experience in Application Security, Scalability and Performance.
  • Ability to envision the big picture and work on details. 
  • Can articulate technology vision and delivery strategy in a way that is understandable to technical and non-technical audiences.
  • Willingness to learn and adapt different technologies/work environments.
  • Knowledge of and skilled in various tools, languages, frameworks and cloud technologies with the ability to be hands-on where needed:
    • Programming languages - C++, C#, GoLang, Python, JavaScript and Java
    • JavaScript frameworks - Angular, Node and React JS, etc.
    • Back-end frameworks - .NET, GoLang, etc.
    • Middleware - REST, GraphQL, gRPC
    • Databases - Oracle, Mongo DB, Cassandra, PostgreSQL etc.
    • Azure and Amazon cloud services. Proven experience in building cloud-native apps on either or both cloud platforms
    • Kubernetes and Docker containerization
    • CI/CD tools - Circle CI, GitHub, GitLab, Jenkins, Tekton
  • Hands on experience in OOP concepts and design principles.
  • Good to have:
    • Knowledge of cloud-native big data tools (Hadoop, Spark, Argo, Airflow) and data science frameworks (PyTorch, Scikit-learn, Keras, TensorFlow, NumPy)
    • Exposure to ERP application development is advantageous.
  • Excellent communication and multi-tasking skills along with an innovative mindset.

See more jobs at IFS

Apply for this job

14d

Lead Data Engineer (F/H)

ASI, Nantes, France, Remote
S3, agile, nosql, airflow, sql, azure, api, java

ASI is hiring a Remote Lead Data Engineer (F/H)

Job Description

With Simon GRIFFON, head of the Nantes Data team, we are looking for a Lead Data Engineer to set up, integrate, develop, and optimize pipeline solutions in cloud and on-premise environments for our client projects. 

Within a dedicated team, mostly in an agile context, your assignments may include: 

  • Contributing to the understanding of business needs and running scoping workshops with the client 

  • Helping draft the functional and technical specifications of data flows 

  • Mastering structured and unstructured data formats and knowing how to manipulate them 

  • Modeling and implementing decision-support systems 

  • Installing and connecting an ETL/ELT solution to a data source, taking the client's constraints and environment into account 

  • Designing and building a data transformation and enrichment pipeline and orchestrating its execution 

  • Ensuring data pipelines are secured 

  • Designing and building APIs that use the enriched data 

  • Defining test and integration plans 

  • Handling evolutionary and corrective maintenance 

  • Mentoring junior engineers as they grow their skills 

 

Depending on your skills and interests, you will work with one or more of the following technologies: 

  • The data ecosystem, notably Microsoft Azure 

  • Languages: SQL, Java 

  • SQL and NoSQL databases 

  • Cloud storage: S3, Azure Blob Storage… 

  • ETL/ESB and other tools: Talend, Spark, Kafka, NiFi, Matillion, Airflow, Data Factory, Glue... 

 

By joining ASI: 

  • You will work in a company with flexible internal working practices, backed by an attentive HR policy (3 days/week remote-work agreement, "congé parenthèse" leave agreement…) 

  • You will join ASI's various expert communities to share best practices and take part in continuous-improvement initiatives. 

  • You will work in a company soon to be recognized as a "Société à mission", Team GreenCaring and not GreenWashing, with a CSR approach that has been embodied and actively driven for more than 10 years (dedicated CSR team, sustainable-mobility allowance agreement…) 

Qualifications

With a higher-education background in computer science or mathematics, or a Big Data specialization, you have at least 10 years of experience in data engineering and a successful operational track record of building pipelines for structured and unstructured data. 

The salary offered for this position is between €40,000 and €45,000, depending on experience and skills, while respecting pay equity within the team. 

Committed to the quality of what you deliver, you are rigorous and organized in your work. 

With a solid technology culture, you regularly keep watch on the industry to refresh your knowledge. 

A good level of English, both written and spoken, is recommended. 

You are a true team player, and your leadership enables you to guide the team with care and pedagogy to help it grow. 

Eager to join a company that reflects who you are, you recognize yourself in our values of trust, listening, enjoyment, and commitment.

 

With equal qualifications, this position is open to people with disabilities.  

See more jobs at ASI

Apply for this job

15d

Senior Data Scientist

redis, Bachelor's degree, terraform, airflow, sql, ansible, docker, kubernetes, python

ReCharge Payments is hiring a Remote Senior Data Scientist

Who we are

In a world where acquisition costs are skyrocketing, funding is scarce, and ecommerce merchants are forced to do more with less, the most innovative DTC brands understand that subscription strategy is business strategy.

Recharge is simplifying retention and growth for innovative ecommerce brands. As the #1 subscription platform, Recharge is dedicated to empowering brands to easily set up and manage subscriptions, create dynamic experiences at every customer touchpoint, and continuously evaluate business performance. Powering everything from no-code customer portals, personalized offers, and dynamic bundles, Recharge helps merchants seamlessly manage, grow, and delight their subscribers while reducing operating costs and churn. Today, Recharge powers more than 20,000 merchants serving 100 million subscribers, including brands such as Blueland, Hello Bello, LOLA, Chamberlain Coffee, and Bobbie—Recharge doesn’t just help you sell products, we help build buyer routines that last.

Recharge is recognized on the Technology Fast 500, awarded by Deloitte, (3rd consecutive year) and is Great Place to Work Certified.

Senior Data Analyst, Product Analytics

Recharge is positioned to support the best Direct-To-Consumer ecommerce brands in the world. We are building multiple AI-based analytic products that revolutionize how our merchants leverage insight to retain and grow their business. 


We are looking for a data scientist who is value-driven and passionate about providing actionable insights and helping to create data products that our product and growth teams can leverage. As a data scientist, you will work on both product analytics and advanced analytics projects, collaborating closely with data engineering and product to deliver value to our merchants through analytics and insights.


You will be responsible for preparing data for prescriptive and predictive modeling, driving hypotheses, applying stats, and developing architecture for algorithms. 


What you’ll do

  • Live by and champion all of our core values (#ownership, #empathy, #day-one, and #humility).

  • Collaborate with stakeholders in cross-projects and team settings to identify and clarify business or product questions to answer. Provide feedback to translate and refine business questions into tractable analysis, evaluation metrics, or mathematical models.

  • Perform analysis utilizing relevant tools (e.g., SQL, Python). Provide analytical thought leadership through proactive and strategic contributions (e.g., suggests new analyses, infrastructure or experiments to drive improvements in the business).

  • Own outcomes for projects by covering problem definition, metrics development, data extraction and manipulation, visualization, creation, and implementation of analytical/statistical models, and presentation to stakeholders.

  • Develop solutions, lead, and manage problems that may be ambiguous and lacking clear precedent by framing problems, generating hypotheses, and making recommendations from a perspective that combines both analytical and product-specific expertise.

  • Work independently to find creative solutions to difficult problems.

  • Effectively communicate analyses and experimental outcomes to business stakeholders, ensuring insights are presented with clear business context and relevance.

  • Write and maintain technical documentation for the data models and analytics solutions.
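As a toy version of the metrics and hypothesis work described above, here is a sketch using only Python's standard library. The data and metric are invented for illustration, and a real analysis would use a proper statistical test rather than this back-of-the-envelope effect size:

```python
from statistics import mean, stdev

# Hypothetical A/B experiment: order values for control vs. a new offer.
control = [12.0, 15.5, 11.0, 14.5, 13.0]
variant = [16.0, 18.5, 15.0, 17.5, 14.0]

# Observed lift in average order value.
lift = mean(variant) - mean(control)

# Rough standardized effect size (Cohen's d with a simple average of
# variances); production work would use a proper significance test.
pooled_sd = ((stdev(control) ** 2 + stdev(variant) ** 2) / 2) ** 0.5
effect_size = lift / pooled_sd

print(round(lift, 2), round(effect_size, 2))  # 3.0 1.65
```

The value of a sketch like this is in framing: defining the metric and the comparison precisely before reaching for heavier tooling.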
     

What you'll bring

  • Bachelor's degree, or equivalent work experience, in Statistics, Mathematics, Data Science, Engineering, Physics, Economics, or a related quantitative field.

  • 5+ years of work experience using analytics to solve product or business problems, performing statistical analysis, and coding (e.g., Python, R, SQL) 

  • Preferred experience in leveraging LLMs to address business challenges, and familiarity with frameworks such as Langchain.

  • Experience developing and operating within Snowflake

  • Expert in translating data findings to broader audiences including non-data stakeholders, engineering, and executive leadership to maximize impact

  • Preferred experience in dimensional modeling in dbt 

  • Experience working on advanced analytics models (machine learning or learning based models) that accomplish tasks such as making recommendations or scoring users.

  • Ability to demonstrate high self-sufficiency to take on complex problems in a timely manner

  • Consistently navigates ambiguous technical and business requirements while making flexible technical decisions

  • Consistently delivers technically challenging tasks efficiently with quality, speed, and simplicity

  • Payments and/or Ecommerce experience preferred


Our Stack

Vertex AI, Google Colab, Looker, dbt, Snowflake, Airflow, Fivetran, CloudSQL/MySQL, Python (Pandas, NumPy, Scikit-learn), GitLab, Flask, Jinja, ES6, Vue.js, Sass, Webpack, Redis, Docker, GCP, Kubernetes, Helmfile, Terraform, Ansible, Nginx

Recharge | Instagram | Twitter | Facebook

Recharge Payments is an equal opportunity employer. In addition to EEO being the law, it is a policy that is fully consistent with our principles. All qualified applicants will receive consideration for employment without regard to status as a protected veteran or a qualified individual with a disability, or other protected status such as race, religion, color, national origin, sex, sexual orientation, gender identity, genetic information, pregnancy or age. Recharge Payments prohibits any form of workplace harassment. 

Transparency in Coverage

This link leads to the Anthem Blue Cross machine-readable files that are made available in response to the federal Transparency in Coverage Rule and includes network negotiated rates for all items and services; allowed amounts for OON items, services and prescription drugs; and negotiated rates and historical prices for network prescription drugs (delayed). EIN 80-6245138. This link leads to the Kaiser machine-readable files.

#LI-Remote

See more jobs at ReCharge Payments

Apply for this job

15d

(Senior) Data Engineer - France (F/M/D)

Shippeo | Paris, France, Remote
ML, airflow, sql, RabbitMQ, docker, kubernetes, python

Shippeo is hiring a Remote (Senior) Data Engineer - France (F/M/D)

Job Description

The Data Intelligence Tribe is responsible for leveraging Shippeo's data from our large shipper and carrier base to build data products and ML models that provide predictive insights. These help our users (shippers and carriers alike) to:

  • get accurately alerted in advance of any potential delays or anomalies on their multimodal flows, so that they can proactively anticipate any resulting disruptions

  • extract the data they need, get direct access to it or analyze it directly on the platform to gain actionable insights that can help them increase their operational performance and the quality and compliance of their tracking

  • benefit from best-in-class data quality, backed by advanced cleansing & enhancement rules

As a Data Engineer at Shippeo, your objective is to ensure that data is available and exploitable by our Data Scientists and Analysts on our different data platforms. You will contribute to the construction and maintenance of Shippeo’s modern data stack that’s composed of different technology blocks:

  • Data Acquisition (Kafka, KafkaConnect, RabbitMQ),

  • Batch data transformation (Airflow, DBT),

  • Cloud Data Warehousing (Snowflake, BigQuery),

  • Stream/event data processing (Python, Docker, Kubernetes), plus all the underlying infrastructure that supports these use cases.
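As a rough illustration of the kind of cleansing rule the batch-transformation block above (Airflow + DBT) would schedule, here is a minimal plain-Python sketch. All field names and rules are hypothetical, not Shippeo's actual logic:

```python
# Toy batch-cleansing step: drop malformed tracking events and
# normalise timestamps before rows head to the warehouse.
# (Hypothetical schema for illustration only.)
from datetime import datetime, timezone

def clean_events(raw_events):
    """Keep only events with a shipment reference and a timestamp;
    normalise the timestamp to UTC ISO-8601 and default the status."""
    cleaned = []
    for ev in raw_events:
        if not ev.get("shipment_id"):
            continue  # malformed: no shipment reference
        ts = ev.get("ts")
        if ts is None:
            continue  # malformed: no timestamp
        cleaned.append({
            "shipment_id": ev["shipment_id"],
            "event_time": datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(),
            "status": ev.get("status", "UNKNOWN").upper(),
        })
    return cleaned

raw = [
    {"shipment_id": "S1", "ts": 1700000000, "status": "in_transit"},
    {"shipment_id": None, "ts": 1700000100},   # dropped: no shipment_id
    {"shipment_id": "S2", "ts": 1700000200},   # kept: status defaulted
]
rows = clean_events(raw)
print(len(rows), rows[1]["status"])  # 2 UNKNOWN
```

In the real stack, a step like this would run as an Airflow task (or a DBT model) writing into Snowflake/BigQuery rather than printing.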

 

Qualifications

Required:

  • You have a degree (MSc or equivalent) in Computer Science.

  • 3+ years of experience as a Data Engineer.

  • Experience building, maintaining, testing and optimizing data pipelines and architectures

  • Programming skills in Python 

  • Advanced working knowledge of SQL, experience working with relational databases and familiarity with a variety of databases.

  • Working knowledge of message queuing and stream processing.

  • Advanced knowledge of Docker and Kubernetes.

  • Advanced knowledge of a cloud platform (preferably GCP).

  • Advanced knowledge of a cloud based data warehouse solution (preferably Snowflake).

  • Experience with Infrastructure as code (Terraform/Terragrunt)

  • Experience building and evolving CI/CD pipelines (Github Actions).

Desired: 

  • Experience with Kafka and KafkaConnect (Debezium).

  • Monitoring and alerting on Grafana / Prometheus.

  • Experience working on Apache Nifi.

  • Experience working with workflow management systems such as Airflow.

See more jobs at Shippeo

Apply for this job

17d

Staff ML Systems Engineer, ML Orchestration

Cruise | US Remote
ML, golang, Bachelor's degree, nosql, airflow, Design, azure, c++, kubernetes, python

Cruise is hiring a Remote Staff ML Systems Engineer, ML Orchestration

We're Cruise, a self-driving service designed for the cities we love.

We’re building the world’s most advanced self-driving vehicles to safely connect people to the places, things, and experiences they care about. We believe self-driving vehicles will help save lives, reshape cities, give back time in transit, and restore freedom of movement for many.

In our cars, you’re free to be yourself. It’s the same here at Cruise. We’re creating a culture that values the experiences and contributions of all of the unique individuals who collectively make up Cruise, so that every employee can do their best work. 

Cruise is committed to building a diverse, equitable, and inclusive environment, both in our workplace and in our products. If you are looking to play a part in making a positive impact in the world by advancing the revolutionary work of self-driving cars, come join us. Even if you might not meet every requirement, we strongly encourage you to apply. You might just be the right candidate for us.

About the team:

The Machine Learning Orchestration team at Cruise is dedicated to owning and developing our cutting-edge workflow management platform. This platform provides a semantic orchestration framework for machine learning workflows and data processing, enabling our engineers to accelerate the development cycle and focus on enhancing the safety and performance of our autonomous vehicles.

Position Overview:

We are seeking an experienced Staff Software Engineer to lead pivotal initiatives within our ML Orchestration team. You will play a crucial role in scaling our platform, developing automation and self-service tools for our users, and ensuring the efficient operation of ML pipelines at scale.

Note: This role is part of an infrastructure engineering team and does not involve the application of machine learning models for specific tasks. Instead, the focus is on developing infrastructure products that empower our customers to perform machine learning and data science at scale.
 

What you’ll be doing:

  • Design & Implementation: Utilize the latest cloud technologies (GCP/Azure) to design, implement, and test scalable distributed computing and data processing solutions in the cloud.

  • Project Ownership: Take ownership of technical projects from inception to completion, contribute to the product roadmap, and make informed decisions on major technical trade-offs.

  • Collaboration: Effectively engage in team planning, code reviews, and design discussions, considering the impact of projects across multiple teams while proactively managing conflicts.

  • Mentorship & Recruitment: Conduct technical interviews with calibrated standards, and onboard and mentor engineers and interns, fostering a culture of growth and knowledge sharing.
     

What you must have:

  • 8+ years of experience, with a strong background in large-scale distributed systems preferred.

  • 3+ years of experience leading and driving large-scale initiatives.

  • Proficiency in building scalable infrastructure on the cloud using Python, C++, Golang, or similar languages.

  • Experience working with relational and NoSQL databases.

  • Demonstrated ability to develop and maintain systems at scale.

  • A Bachelor’s, Master’s, or Ph.D. in Computer Science, Electrical Engineering, Mathematics, Physics, or a related field; or equivalent practical experience.

  • A passion for self-driving technology and its transformative potential.

  • Strong attention to detail and a commitment to accuracy.

  • A proven track record of efficiently solving complex problems.

  • A startup mentality with a willingness to embrace uncertainty and wear multiple hats.
     

Bonus Points:

  • Experience with Google Cloud Platform, Microsoft Azure, or Amazon Web Services

  • Experience with open-source orchestration platforms such as Kubeflow, Flyte, Airflow, etc.

  • Experience with Kubernetes

  • Understanding of Machine Learning (ML) models/pipelines

  • Python/C++/Golang proficiency

  • Relevant publications

The salary range for this position is $175,100 - $257,500. Compensation will vary depending on location, job-related knowledge, skills, and experience. You may also be offered a bonus, long-term incentives, and benefits. These ranges are subject to change.

Why Cruise?

Our benefits are here to support the whole you:

  • Competitive salary and benefits 
  • Medical / dental / vision, Life and AD&D
  • Subsidized mental health benefits
  • Paid time off and holidays
  • Paid parental, medical, family care, and military leave of absence
  • 401(k) Cruise matching program 
  • Fertility benefits
  • Dependent Care Flexible Spending Account
  • Flexible Spending Account & Health Saving Account
  • Perks Wallet program for benefits/perks
  • Pre-tax Commuter benefit plan for local employees
  • CruiseFlex, our location-flexible work policy. (Learn more about CruiseFlex).

We’re Integrated

  • Through our partnerships with General Motors and Honda, we are the only self-driving company with fully integrated manufacturing at scale.

We’re Funded

  • GM, Honda, Microsoft, T. Rowe Price, and Walmart have invested billions in Cruise. Their backing for our technology demonstrates their confidence in our progress, team, and vision and makes us one of the leading autonomous vehicle organizations in the industry. Our deep resources greatly accelerate our operating speed.

Cruise LLC is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do the best work of their lives. We seek applicants of all backgrounds and identities, across race, color, caste, ethnicity, national origin or ancestry, age, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Cruise will consider for employment qualified applicants with arrest and conviction records, in accordance with applicable laws.

Cruise is committed to the full inclusion of all applicants. If reasonable accommodation is needed to participate in the job application or interview process, please let our recruiting team know or email HR@getcruise.com.

We proactively work to design hiring processes that promote equity and inclusion while mitigating bias. To help us track the effectiveness and inclusivity of our recruiting efforts, please consider answering the following demographic questions. Answering these questions is entirely voluntary. Your answers to these questions will not be shared with the hiring decision makers and will not impact the hiring decision in any way. Instead, Cruise will use this information not only to comply with any government reporting obligations but also to track our progress toward meeting our diversity, equity, inclusion, and belonging objectives. Know Your Rights: Workplace Discrimination is Illegal

In any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.

Candidates applying for roles that operate and remotely operate the AV: Licensed to drive a motor vehicle in the U.S. for the three years immediately preceding your application, currently holding an active in-state regular driver's license or equivalent, and no more than one point on driving record. Successful completion of a background check, drug screen, and DMV Motor Vehicle Record check is also required.

Note to Recruitment Agencies: Cruise does not accept unsolicited agency resumes. Furthermore, Cruise does not pay placement fees for candidates submitted by any agency other than its approved partners.

No Application Deadline

Apply for this job

17d

Tech Lead Manager, ML Orchestration

Cruise | US Remote
ML, golang, Bachelor's degree, airflow, Design, azure, c++, kubernetes, python

Cruise is hiring a Remote Tech Lead Manager, ML Orchestration

We're Cruise, a self-driving service designed for the cities we love.

We’re building the world’s most advanced self-driving vehicles to safely connect people to the places, things, and experiences they care about. We believe self-driving vehicles will help save lives, reshape cities, give back time in transit, and restore freedom of movement for many.

In our cars, you’re free to be yourself. It’s the same here at Cruise. We’re creating a culture that values the experiences and contributions of all of the unique individuals who collectively make up Cruise, so that every employee can do their best work. 

Cruise is committed to building a diverse, equitable, and inclusive environment, both in our workplace and in our products. If you are looking to play a part in making a positive impact in the world by advancing the revolutionary work of self-driving cars, come join us. Even if you might not meet every requirement, we strongly encourage you to apply. You might just be the right candidate for us.

About the team:

The Machine Learning Orchestration team at Cruise is dedicated to owning and developing our cutting-edge workflow management platform. This platform provides a semantic orchestration framework for machine learning workflows and data processing, enabling our engineers to accelerate the development cycle and focus on enhancing the safety and performance of our autonomous vehicles.

Position Overview:

We are seeking an experienced Tech Lead Manager to lead a pivotal team within our ML Orchestration group. You will play a crucial role in scaling our platform, developing automation and self-service tools for our users, and ensuring the efficient operation of ML pipelines at scale.

Note: This team and role are part of an infrastructure engineering organization and do not involve the application of machine learning models for specific tasks. Instead, the focus is on developing infrastructure products that empower our customers to perform machine learning and data science at scale.
 

What you’ll be doing:

  • Team Leadership: Manage and technically guide a team of engineers, providing support and resources to help them advance in their careers and achieve their professional goals.

  • Design & Implementation: Utilize the latest cloud technologies (GCP/Azure) to architect, implement, and test scalable distributed computing and data processing solutions.

  • Project Ownership: Take full ownership of technical projects from inception to completion, actively contributing to the product roadmap and making informed decisions on critical technical trade-offs.

  • Collaboration: Foster effective collaboration by engaging in team planning, code reviews, and design discussions. Assess the implications of projects across multiple teams and proactively address any conflicts that arise.

  • Hiring & Mentorship: Lead recruitment efforts by conducting technical interviews with calibrated standards, onboarding new engineers and interns, and mentoring them to cultivate a culture of growth, knowledge sharing, and continuous improvement.

  • Engineering Best Practices: Ensure the adoption of and adherence to best engineering practices, maintaining a high standard of quality in all product offerings.

  • Strategic Planning: Drive strategic planning and vision-setting initiatives while establishing scalable processes for effective execution.
     

What you must have:

  • 2+ years of experience leading a team as a tech lead and/or manager

  • 8+ years of experience, with a strong background in large-scale distributed systems preferred.

  • 3+ years of experience leading and driving large-scale initiatives.

  • Proficiency in building scalable infrastructure on the cloud using Python, C++, Golang, or similar languages.

  • Demonstrated ability to develop and maintain systems at scale.

  • A Bachelor’s, Master’s, or Ph.D. in Computer Science, Electrical Engineering, Mathematics, Physics, or a related field; or equivalent practical experience.

  • A passion for self-driving technology and its transformative potential.

  • Strong attention to detail and a commitment to accuracy.

  • A proven track record of efficiently solving complex problems.

  • A startup mentality with a willingness to embrace uncertainty and wear multiple hats.
     

Bonus Points:

  • Experience with Google Cloud Platform, Microsoft Azure, or Amazon Web Services

  • Experience with open-source orchestration platforms such as Kubeflow, Flyte, Airflow, etc.

  • Experience with Kubernetes

  • Understanding of Machine Learning (ML) models/pipelines

  • Python/C++/Golang proficiency

  • Relevant publications

The salary range for this position is $180,200 - $265,000. Compensation will vary depending on location, job-related knowledge, skills, and experience. You may also be offered a bonus, long-term incentives, and benefits. These ranges are subject to change.

Why Cruise?

Our benefits are here to support the whole you:

  • Competitive salary and benefits 
  • Medical / dental / vision, Life and AD&D
  • Subsidized mental health benefits
  • Paid time off and holidays
  • Paid parental, medical, family care, and military leave of absence
  • 401(k) Cruise matching program 
  • Fertility benefits
  • Dependent Care Flexible Spending Account
  • Flexible Spending Account & Health Saving Account
  • Perks Wallet program for benefits/perks
  • Pre-tax Commuter benefit plan for local employees
  • CruiseFlex, our location-flexible work policy. (Learn more about CruiseFlex).

We’re Integrated

  • Through our partnerships with General Motors and Honda, we are the only self-driving company with fully integrated manufacturing at scale.

We’re Funded

  • GM, Honda, Microsoft, T. Rowe Price, and Walmart have invested billions in Cruise. Their backing for our technology demonstrates their confidence in our progress, team, and vision and makes us one of the leading autonomous vehicle organizations in the industry. Our deep resources greatly accelerate our operating speed.

Cruise LLC is an equal opportunity employer. We strive to create a supportive and inclusive workplace where contributions are valued and celebrated, and our employees thrive by being themselves and are inspired to do the best work of their lives. We seek applicants of all backgrounds and identities, across race, color, caste, ethnicity, national origin or ancestry, age, citizenship, religion, sex, sexual orientation, gender identity or expression, veteran status, marital status, pregnancy or parental status, or disability. Applicants will not be discriminated against based on these or other protected categories or social identities. Cruise will consider for employment qualified applicants with arrest and conviction records, in accordance with applicable laws.

Cruise is committed to the full inclusion of all applicants. If reasonable accommodation is needed to participate in the job application or interview process, please let our recruiting team know or email HR@getcruise.com.

We proactively work to design hiring processes that promote equity and inclusion while mitigating bias. To help us track the effectiveness and inclusivity of our recruiting efforts, please consider answering the following demographic questions. Answering these questions is entirely voluntary. Your answers to these questions will not be shared with the hiring decision makers and will not impact the hiring decision in any way. Instead, Cruise will use this information not only to comply with any government reporting obligations but also to track our progress toward meeting our diversity, equity, inclusion, and belonging objectives. Know Your Rights: Workplace Discrimination is Illegal

In any materials you submit, you may redact or remove age-identifying information such as age, date of birth, or dates of school attendance or graduation. You will not be penalized for redacting or removing this information.

Candidates applying for roles that operate and remotely operate the AV: Licensed to drive a motor vehicle in the U.S. for the three years immediately preceding your application, currently holding an active in-state regular driver's license or equivalent, and no more than one point on driving record. Successful completion of a background check, drug screen, and DMV Motor Vehicle Record check is also required.

Note to Recruitment Agencies: Cruise does not accept unsolicited agency resumes. Furthermore, Cruise does not pay placement fees for candidates submitted by any agency other than its approved partners.

No Application Deadline

Apply for this job