airflow Remote Jobs

128 Results

2d

Data Platforms Solutions Architect

Accesa - Ratiodata | Employees can work remotely, Romania, Remote
ML, Sales, DevOps, Terraform, Scala, Airflow, SQL, Design, Azure, Python, AWS

Accesa - Ratiodata is hiring a Remote Data Platforms Solutions Architect

Job Description

We are seeking an experienced Data Platforms Solutions Architect to join our team. This role goes beyond traditional architecture, with a strong emphasis on applying technology knowledge to business development. As a Solutions Architect, you will collaborate with clients and internal teams to design and present data platform solutions tailored to the needs of each customer opportunity.

You will leverage your hands-on experience with modern data engineering tools paired with your ability to provide strategic insights during presales engagements.

Responsibilities: 

 

  • Drive innovation in Data & AI business opportunities: actively contribute to business development by identifying opportunities, understanding client needs, and proposing innovative data platform solutions based on modern cloud accelerators and frameworks (e.g. Databricks, Snowflake, Synapse);
  • Tailor client-focused solutions: partner with sales teams to deliver compelling technical presentations, demonstrations, and proofs of concept, and influence clients' strategic decisions by showcasing the value of data engineering and platform architectures tailored to their needs;
  • Lead discovery and shape solutions: lead Discovery Workshops with new clients, collecting business requirements and working with the BA/PO to document them;
  • Design operational-fit architectures: carefully analyse all use cases of the data platform for the business value they add (e.g. pipelines for BI, AI-driven decision-making, data mesh self-service architectures);
  • Offer technical advisory: guide clients through tool selection and best practices in data platform design, including the reasoning behind particular choices, and pitch the end-to-end solution to internal and end-customer stakeholders;
  • Advance technical expertise: design and implement scalable, secure, and high-performance data platforms, including data lakes, warehouses, and integration pipelines;
  • Collaborate with cross-functional teams: partner with stakeholders to align business goals with technology solutions, proposing practical models to calculate total cost of ownership (TCO);
  • Technical leadership: implement the PoCs associated with the high-level solution designs and lead mid-senior data engineers in turning a PoC into a production MVP.

 

Qualifications

Must have:

  • 7+ years of experience in data engineering, focusing on data governance, data lakes, pipelines, and ETL/ELT processes;
  • 3+ years of experience in a Data Platforms Solutions Architect role, leading platform design and implementation;
  • Expertise in data lake platforms like Databricks, Azure Data Lake, AWS Lake Formation, or Google BigQuery;
  • Proficiency in data processing frameworks (Apache Spark, Kafka, Flink) and orchestration tools (Airflow, ADF, AWS Step Functions);
  • Strong knowledge of security & governance (RBAC, encryption, GDPR, HIPAA compliance) and distributed storage systems (Parquet, ORC, Delta Lake);
  • Experience with cloud-native platforms (AWS, Azure, GCP) and IaC tools (Terraform, CloudFormation, or Bicep);
  • Proficiency in automation, DevOps, and observability using tools like DataDog, Prometheus, or cloud-native monitoring;
  • Strong skills in Python, SQL, or Scala for building data transformations and performance-optimized models (star, snowflake, data vault);
  • Pre-sales support experience, including RFPs, technical proposals, and business development support;
  • Strong communication & leadership skills, with the ability to engage technical and non-technical stakeholders.
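Since the list above asks for performance-optimized star, snowflake, and data vault models, here is a minimal, hedged sketch of the star-schema shape using SQLite (all table and column names are illustrative assumptions, not from the posting):

```python
import sqlite3

# Illustrative star schema: one fact table surrounded by dimension tables.
# Names (dim_customer, fact_sales, ...) are hypothetical examples.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE dim_date     (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales   (customer_id INTEGER, date_id INTEGER, amount REAL);
INSERT INTO dim_customer VALUES (1, 'EU'), (2, 'US');
INSERT INTO dim_date     VALUES (10, 2024), (11, 2025);
INSERT INTO fact_sales   VALUES (1, 10, 100.0), (1, 11, 50.0), (2, 11, 75.0);
""")

# A typical analytical query: join the fact to its dimensions and aggregate.
rows = cur.execute("""
SELECT c.region, d.year, SUM(f.amount)
FROM fact_sales f
JOIN dim_customer c ON c.customer_id = f.customer_id
JOIN dim_date d     ON d.date_id = f.date_id
GROUP BY c.region, d.year
ORDER BY c.region, d.year
""").fetchall()
print(rows)  # [('EU', 2024, 100.0), ('EU', 2025, 50.0), ('US', 2025, 75.0)]
```

The point of the shape is that measures live in one narrow fact table while descriptive attributes live in dimensions, so analytical queries stay simple joins plus an aggregate.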

Nice to have:

 

  • Experience integrating data lakes with machine learning platforms like TensorFlow, PyTorch, or cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI);
  • Familiarity with feature stores and their role in operationalizing ML (e.g., Feast, Databricks Feature Store);
  • Understanding of data modeling, data ingestion, and ETL/ELT processes tailored for AI/ML use cases;
  • The following certifications are a plus: Microsoft Certified: Azure Solutions Architect Expert, Google Professional Cloud Architect, Databricks Certified Data Engineer Professional, Snowflake Advanced Architect Certification, Certified AI Practitioner (CAIP).

Apply for this job

8d

Staff Data Engineer

SonderMind | Denver, CO or Remote
S3, 4 years of experience, Terraform, Scala, NoSQL, Airflow, SQL, Design, MongoDB, pytest, API, Java, C++, Docker, Kubernetes, Python, AWS, Backend

SonderMind is hiring a Remote Staff Data Engineer

About SonderMind

At SonderMind, we know that therapy works. SonderMind provides accessible, personalized mental healthcare that produces high-quality outcomes for patients. SonderMind's individualized approach to care starts with using innovative technology to help people not just find a therapist, but find the right, in-network therapist for them, should they choose to use their insurance. From there, SonderMind's clinicians are committed to delivering best-in-class care to all patients by focusing on high-quality clinical outcomes. To enable our clinicians to thrive, SonderMind defines care expectations while providing tools such as clinical note-taking, secure telehealth capabilities, outcome measurement, messaging, and direct booking.

To follow the latest SonderMind news, get to know our clients, and learn about what it’s like to work at SonderMind, you can follow us on Instagram, Linkedin, and Twitter. 

About the Role

In this role, you will be responsible for designing, building, and managing the information infrastructure systems used to collect, store, process, and distribute production and reporting data. This role will work closely with software and data engineers, as well as data scientists, to deploy Applied Science services. This role will also interact with business analysts and technical marketing teams to ensure they have the data necessary to complete their analyses and campaigns.

What you will do 

  • Strategically design, construct, install, test, and maintain highly scalable data management systems
  • Develop and maintain databases, data processing procedures, and pipelines
  • Integrate new data management technologies and software engineering tools into existing structures
  • Develop processes for data mining, data modeling, and data production
  • Translate complex functional and technical requirements into detailed architecture, design, and high-performing software and applications
  • Create custom software components and analytics applications
  • Troubleshoot data-related issues and perform root cause analysis to resolve them
  • Manage overall pipeline orchestration
  • Optimize data warehouse performance

What does success look like?

Success in this role will be gauged by the seamless and efficient operation of data infrastructure. This includes minimal downtime, accurate and timely data delivery, and the successful implementation of new technologies and tools. The individual will have demonstrated their ability to collaborate effectively to define solutions with both technical and non-technical team members across data science, engineering, product, and our core business functions. They will have made significant contributions to improving our data systems, whether through optimizing existing processes or developing innovative new solutions. Ultimately, their work will enable more informed and effective decision-making across the organization.

Who You Are 

Skills, experience, and education needed to succeed in this role:

  • Bachelor’s degree in Computer Science, Engineering, or a related field
  • Minimum of 4 years' experience as a Data Engineer or in a similar role
  • Proficient with scripting and programming languages (Python, Java, Scala, etc.)
  • In-depth knowledge of SQL and other database-related technologies
  • Experience with Snowflake, DBT, BigQuery, Fivetran, Segment, etc.
  • Experience with AWS cloud services (S3, RDS, Redshift, etc.)
  • Experience with data pipeline and workflow management tools such as Airflow
  • Backend Development experience with the following:
    • REST API design using web frameworks such as FastAPI, Flask
    • Data modeling for microservices, especially using NoSQL databases like MongoDB
    • CI/CD pipelines (Gitlab preferred) and microservices deployment to AWS cloud
    • Docker, Kubernetes, Helm Charts, Terraform
    • Developing unit tests for microservices using testing frameworks like pytest
  • Strong negotiation and interpersonal skills: written, verbal, analytical
  • Motivated and influential – proactive with the ability to adhere to deadlines; work to “get the job done” in a fast-paced environment
  • Self-starter with the ability to multi-task
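As a rough illustration of the pytest-style unit testing the list above calls for, a self-contained sketch (the `price_with_tax` helper is hypothetical, not a SonderMind API; pytest collects any `test_*` function and runs its assertions):

```python
# Hedged sketch of unit-testing a microservice helper, pytest-style.
# `price_with_tax` is a made-up example function for illustration only.

def price_with_tax(amount: float, rate: float = 0.08) -> float:
    """Hypothetical service helper: apply a tax rate to an amount."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + rate), 2)

def test_price_with_tax_default_rate():
    # Happy path: 8% default tax on 100.0.
    assert price_with_tax(100.0) == 108.0

def test_price_with_tax_rejects_negative():
    # Error path: negative amounts must raise.
    try:
        price_with_tax(-1.0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Run with `pytest file.py`; the same functions also execute as plain asserts without pytest installed.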

Our Benefits 

The anticipated salary range for this role will be $132,000-165,000.

As leaders in redesigning behavioral health, we walk the walk with our employees' benefits. We want the experience of working at SonderMind to accelerate people’s careers and enrich their lives, so we focus on meeting SonderMinders wherever they are and supporting them in all facets of their lives and work.

Our benefits include:

  • A commitment to fostering flexible hybrid work
  • A generous PTO policy with a minimum of three weeks off per year
  • Free therapy coverage benefits to ensure our employees have access to the care they need (must be enrolled in our medical plans to participate)
  • Competitive Medical, Dental, and Vision coverage with plans to meet every need, including HSA ($1,100 company contribution) and FSA options
  • Employer-paid short-term, long-term disability, life & AD&D to cover life's unexpected events. Not only that, we also cover the difference in salary for up to seven (7) weeks of short-term disability leave (after the required waiting period) should you need to use it.
  • Eight weeks of paid Parental Leave (if the parent also qualifies for short-term disability, this benefit is in addition, allowing between 8 and 16 weeks of paid leave)
  • 401K retirement plan with 100% matching, immediately vested, on up to 4% of base salary
  • Travel to Denver 1x a year for annual Shift gathering
  • Fourteen (14) company holidays
  • Company Shutdown between Christmas and New Years
  • Supplemental life insurance, pet insurance coverage, commuter benefits and more!

Application Deadline

This position will be an ongoing recruitment process and will be open until filled.

Equal Opportunity 
SonderMind does not discriminate in employment opportunities or practices based on race, color, creed, sex, gender, gender identity or expression, pregnancy, childbirth or related medical conditions, religion, veteran and military status, marital status, registered domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition (including genetic information or characteristics), sexual orientation, or any other characteristic protected by applicable federal, state, or local laws.

Apply for this job

12d

Software Engineer II, Paid MarTech

Instacart | Canada - Remote (ON, AB or BC Only)
Sales, Airflow, SQL, Design, Python

Instacart is hiring a Remote Software Engineer II, Paid MarTech

We're transforming the grocery industry

At Instacart, we invite the world to share love through food because we believe everyone should have access to the food they love and more time to enjoy it together. Where others see a simple need for grocery delivery, we see exciting complexity and endless opportunity to serve the varied needs of our community. We work to deliver an essential service that customers rely on to get their groceries and household goods, while also offering safe and flexible earnings opportunities to Instacart Personal Shoppers.

Instacart has become a lifeline for millions of people, and we’re building the team to help push our shopping cart forward. If you’re ready to do the best work of your life, come join our table.

Instacart is a Flex First team

There’s no one-size-fits-all approach to how we do our best work. Our employees have the flexibility to choose where they do their best work—whether it’s from home, an office, or your favorite coffee shop—while staying connected and building community through regular in-person events. Learn more about our flexible approach to where we work.

 

About the Role 

We are actively seeking a highly skilled MarTech Engineer to become a pivotal member of our cross-functional team, dedicated to propelling Instacart's growth through our Paid Marketing channels—one of the most substantial growth levers within our organization. This role offers an exciting opportunity to address intricate challenges with innovative solutions that not only enhance the user experience but also drive transformative business outcomes. As a MarTech Engineer, your contributions will play a crucial role in optimizing the efficiency and effectiveness of our Paid Marketing efforts, encompassing a diverse array of channels such as Paid Social, Display, SEM, Online Video, Linear Television, and Streaming Audio.

Your work will directly impact the scalability and performance of these channels, enabling us to reach broader audiences and amplify Instacart's market presence. By leveraging cutting-edge technology and collaborating with various departments, you will help create strategies that deliver measurable results and foster sustained growth. If you are passionate about making a tangible impact through technology and marketing innovation, we invite you to join our team and be part of redefining the grocery industry.

 

About the Team

This role will be positioned within the marketing organization, embedded directly into our Cross Channel Integrations team – the technical arm of Instacart Marketing. The Cross Channel Integrations team is composed of forward-thinking marketing technologists, strategists, and engineers committed to developing solutions that elevate Instacart's marketing channels.

Working at the intersection of marketing and engineering, you will engage in frequent and intensive collaboration with our core engineering team, fostering a dynamic environment where technical insights and innovative solutions are readily exchanged. This integrated model empowers you to rapidly address and execute on marketing priorities while ensuring that you have full access to the comprehensive suite of tech resources, tools, and development opportunities available within Instacart. 

 

About the Job 

  • Building and maintaining data pipelines using Python and Snowflake: Design and oversee efficient data pipelines that handle extensive data processing, ensuring seamless data flow and troubleshooting issues as they arise.
  • Integrating and extracting data from multiple sources such as Google Analytics and Snowflake: Manage ETL processes for integrating diverse data sources, ensuring accurate and timely data flow into centralized systems to support decision-making.
  • Automating tasks like data retrieval and report generation: Develop scripts and workflows to automate data retrieval and report generation, utilizing APIs and scheduling tools to streamline processes and minimize manual effort.
  • Leading DBT initiatives to enhance model efficiency: Spearhead the development and optimization of data models using DBT, ensuring scalability and maintainability, while documenting best practices and providing guidance to the team.
  • Harnessing Airflow for automation tasks and configuring manual runs: Use Apache Airflow to automate and schedule complex workflows, ensuring efficient and accurate execution of data-related tasks.
  • Collaborating with marketing teams utilizing your marketing data experience: Work closely with marketing to translate data into actionable insights, providing analytical support and enhancing data-driven strategies.
  • Working with multiple third-party channels to streamline ad conversions: work closely with stakeholders across third-party channel providers and debug issues as they arise.
  • Collaborating with Marketing to launch new Paid Listing Ads across channels: work with various channels (Snap, Google, Facebook, etc.) to manage and push data to multiple channels and product lists for product-feed ad consumption.
  • Writing advanced SQL queries and setting up data marts: Create complex SQL queries and establish data marts, providing easy access to clean and structured data for marketing analytics.
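The Airflow orchestration work described above amounts to declaring tasks and the dependencies between them. A minimal pure-Python sketch of that dependency ordering, using the standard library rather than Airflow itself (task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract -> transform -> load -> report.
# In Airflow, `extract >> transform >> load >> report` declares the same
# edges; here the stdlib resolves a valid execution order for us.
deps = {
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']

def run(task: str) -> str:
    # Placeholder for real work (API pulls, SQL transforms, report generation).
    return f"ran {task}"

results = [run(t) for t in order]
```

An Airflow scheduler adds retries, backfills, and cron-style scheduling on top, but the core contract is exactly this dependency graph.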

 

About You

Minimum Qualifications

  • Python Expertise: Daily use and extensive knowledge of Python for data integration, automation, and more.
  • Snowflake Proficiency: Regular use for creating data pipelines, performing admin tasks, and managing roles.
  • Google Analytics: Experience with Google Tag Manager and generating daily reports.
  • DBT Leadership: Leading initiatives, model creation, and process documentation.
  • Airflow Experience: Daily automation and configuration of tasks.
  • Marketing Experience: Ability to work with marketing data and collaborate effectively.
  • SQL and ETL Skills: Advanced proficiency in SQL and experience with ETL processes across Snowflake, Redshift, and PostgreSQL.
  • Collaboration Skills: Proven ability to work with various stakeholders including product, sales, and finance teams.

 

Preferred Qualifications

  • Basic understanding of digital marketing and previous experience working with marketing teams.
  • General familiarity with performance marketing KPIs (ROAS, LTV, CAC, CPA, etc.).
  • Understanding of marketing event channel operations; knowledge of launching new channels through the above-mentioned technologies is appreciated.
  • Previous experience working with third-party analytics/tracking platforms.
  • Previous experience managing Google Tag Manager for a marketing or advertising organization.
  • Past experience within a Data Platform Engineering organization or a Data Engineering role.
  • Previous experience on an on-call rotation.

Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here. Currently, we are only hiring in the following provinces: Ontario, Alberta, British Columbia, and Nova Scotia.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For Canadian based candidates, the base pay ranges for a successful candidate are listed below.

CAN
$140,000-$155,000 CAD

See more jobs at Instacart

Apply for this job

13d

Mid-Level Data Engineer AWS Snowflake

Mid Level, S3, Lambda, Agile, Bachelor's degree, 3 years of experience, Airflow, SQL, Design, API, Python, AWS

FuseMachines is hiring a Remote Mid-Level Data Engineer AWS Snowflake

See more jobs at FuseMachines

Apply for this job

13d

Senior Data Engineer

Instacart | United States - Remote
Airflow, SQL, Design, Backend, Frontend

Instacart is hiring a Remote Senior Data Engineer


Overview

At Instacart, our mission is to create a world where everyone has access to the food they love and more time to enjoy it together. Millions of customers every year use Instacart to buy their groceries online, and the Data Engineering team is building the critical data pipelines that underpin all of the myriad of ways that data is used across Instacart to support our customers and partners.

 

About the Role - 

This is a general posting for multiple Senior Data Engineer roles open across our 4-sided marketplace. You’ll get the chance to learn about the different problems Data Engineering teams solve as you go through the process. Towards the end of your process, we’ll do a team-matching exercise to determine which of the open roles/teams you’ll join. You can find a blurb on each team at the bottom of this page. 

 

About the Team -

You will be joining a growing data engineering team and will tackle some of the most challenging and impactful problems that are transforming how people buy groceries every day. You will be embedded within a data-driven team as a trusted partner in uncovering barriers in the product’s usability, and will use these insights to inform product improvements that drive game-changing growth. We’re looking for a self-driven engineer who can hit the ground running to ultimately shape the landscape of data across the entire organization.



About the Job 

  • You will be part of a team with a large amount of ownership and autonomy.
  • Large scope for company-level impact working on financial data.
  • You will work closely with engineers and both internal and external stakeholders, owning a large part of the process from problem understanding to shipping the solution.
  • You will ship high quality, scalable and robust solutions with a sense of urgency.
  • You will have the freedom to suggest and drive organization-wide initiatives.

 

About You

Minimum Qualifications

  • 6+ years of working experience in a Data/Software Engineering role, with a focus on building data pipelines.
  • Expert in SQL, with knowledge of Python.
  • Experience building high quality ETL/ELT pipelines.
  • Past experience with data immutability, auditability, slowly changing dimensions or similar concepts.
  • Experience building data pipelines for accounting/billing purposes.
  • Experience with cloud-based data technologies such as Snowflake, Databricks, Trino/Presto, or similar.
  • Adept at fluently communicating with many cross-functional stakeholders to drive requirements and design shared datasets.
  • A strong sense of ownership, and an ability to balance a sense of urgency with shipping high quality and pragmatic solutions.
  • Experience working with a large codebase on a cross-functional team.
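For the slowly-changing-dimension experience mentioned above, a hedged Type 2 sketch in plain Python (the schema and helper are illustrative assumptions): rather than overwriting a changed attribute, close the current row and append a new open-ended one, preserving auditability.

```python
from datetime import date

# Sentinel end date marking the "current" (open) row of a dimension key.
HIGH_DATE = date(9999, 12, 31)

def scd2_upsert(history: list[dict], key: str, attr: str, as_of: date) -> None:
    """Type 2 SCD: close the open row for `key` if its attribute changed,
    then append a new open row. `history` stands in for a dimension table."""
    current = next(
        (r for r in history if r["key"] == key and r["end"] == HIGH_DATE), None
    )
    if current is not None:
        if current["attr"] == attr:
            return  # no change: keep the existing open row
        current["end"] = as_of  # close out the superseded version
    history.append({"key": key, "attr": attr, "start": as_of, "end": HIGH_DATE})

history: list[dict] = []
scd2_upsert(history, "cust-1", "EU", date(2024, 1, 1))
scd2_upsert(history, "cust-1", "US", date(2024, 6, 1))  # attribute changed
# history now holds two rows: the closed EU row and the open US row.
```

The same close-and-append pattern is what a warehouse `MERGE` statement or dbt snapshot implements against real dimension tables.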

 

Preferred Qualifications

  • Bachelor’s degree in Computer Science, computer engineering, electrical engineering OR equivalent work experience.
  • Experience with Snowflake, dbt (data build tool) and Airflow
  • Experience with data quality monitoring/observability, either using custom frameworks or tools like Great Expectations, Monte Carlo etc.

 

Open Roles

 

Finance Data Engineering 

About the Role -

The Finance data engineering team plays a critical role in defining how financial data is modeled and standardized for uniform, reliable, timely and accurate reporting. This is a high impact, high visibility role owning critical data integration pipelines and models across all of Instacart’s products. This role is an exciting opportunity to join a key team shaping the post-IPO financial data vision and roadmap for the company.

About the Team -

Finance data engineering is part of the Infrastructure Engineering pillar, working closely with accounting, billing & revenue teams to support the monthly/quarterly book close, retailer invoicing and internal/external financial reporting. Our team collaborates closely with product teams to capture critical data needed for financial use cases.

 

Growth Data and Marketing

About the Role - 

The Growth Data and Marketing team is an integral piece of Growth at Instacart, and is directly responsible for modeling the Paid Marketing world to ensure accurate and timely data. This role will be pivotal in building and maintaining data infrastructure that supports the performance and optimization of paid marketing campaigns. You will be at the center of driving data-driven decisions for paid marketing strategies.

About the Team -

The Growth Data and Marketing team is part of the Growth Systems org, working closely with Frontend and Backend Engineering, Data Scientists, and Machine Learning teams to drive and support key data decisions that shape product and partnerships. Our team works closely with product and marketing teams to ensure the right data is used at the right time to drive key business metrics.

 

Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For US based candidates, the base pay ranges for a successful candidate are listed below.

CA, NY, CT, NJ
$221,000-$245,000 USD
WA
$212,000-$235,000 USD
OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI
$203,000-$225,000 USD
All other states
$183,000-$203,000 USD

See more jobs at Instacart

Apply for this job

13d

Senior Business Intelligence Analyst

Instacart | United States - Remote
Tableau, Jira, Airflow, SQL, Design

Instacart is hiring a Remote Senior Business Intelligence Analyst


 

About the Role

 

We are looking for an exceptional Senior Business Intelligence Analyst to help build and manage robust data models, leverage data visualization tools to create dashboards, and partner closely with cross functional teams to design data solutions needed to enable financial reporting and operations. 

 

About the Team

 

You will be joining the Financial Systems Analytics team, which sits within the Finance department at Instacart. This team is responsible for ensuring financial data is accessible, complete, accurate, and timely for downstream consumers. As part of the team, you will be a key contributor in enabling financial data reporting, analysis, and other critical business operations within Instacart.  

 

About the Job

  • Build and regularly maintain data pipelines and models critical to Instacart’s business operation, including those used for financial reporting and analysis 
  • Partner closely with Accounting, Strategic Finance, Data Science, and other teams across the company to understand their most complex problems and develop effective data solutions, including definition and development of supporting data models and architecture
  • Contribute to the optimization, documentation, testing, and tooling efforts aimed at improving data quality and empowering data consumers across the organization
  • Regularly communicate progress, risks, and completion of projects with stakeholders, teammates, and management
  • Work closely with the Product, Data Engineering, and Business Development teams to stay current on the latest product rollouts and their data and financial impacts
  • Promote and drive a self-service data culture by developing self-service data models, building easy-to-use tools and dashboards, and teaching business users how to use them

 

About You

Minimum Qualifications

  • 5+ years of hands-on experience in BI, Data Science, or Data engineering
  • Bachelor’s Degree or equivalent
  • Advanced SQL experience and dashboard building
  • Highly effective written and verbal communication skills
  • Proven ability to prioritize work and deliver finished products on tight deadlines
  • Ability to communicate and coordinate with cross-functional teams, gather information, perform root cause analysis, and recommend solutions to business problems
  • Positive attitude and enthusiasm for Instacart, your team, partners, and stakeholders

 

Preferred Qualifications

  • Familiarity with: Snowflake/Databricks/BigQuery or similar data warehouses, DBT/Apache Airflow or similar orchestration tools, Github, and Jira 
  • Familiarity with Visualization Tools: Mode, Tableau, or similar
  • Understanding of financial concepts, common accounting practices, and system solutions
  • Exposure to SOX compliance best practices, including practical applications and experience with ITGCs



Instacart provides highly market-competitive compensation and benefits in each location where our employees work. This role is remote and the base pay range for a successful candidate is dependent on their permanent work location. Please review our Flex First remote work policy here.

Offers may vary based on many factors, such as candidate experience and skills required for the role. Additionally, this role is eligible for a new hire equity grant as well as annual refresh grants. Please read more about our benefits offerings here.

For US based candidates, the base pay ranges for a successful candidate are listed below.

CA, NY, CT, NJ
$149,000-$165,000 USD
WA
$142,000-$158,000 USD
OR, DE, ME, MA, MD, NH, RI, VT, DC, PA, VA, CO, TX, IL, HI
$137,000-$152,000 USD
All other states
$123,000-$137,000 USD

See more jobs at Instacart

Apply for this job

16d

Software Engineer

Rust, Django, Golang, Redis, Airflow, Postgres, SQL, Ansible, C++, Docker, PostgreSQL, MySQL, Kubernetes, Linux, Python

Cloudflare is hiring a Remote Software Engineer

About Us

At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

While this job can be worked almost entirely remotely, hiring is focused on the following greater metro areas (~50 miles):

  • Austin, TX
  • Atlanta, GA
  • Chicago, IL
  • Denver, CO
  • New York City
  • Seattle, WA
  • Washington, DC
  • Mexico City, MX

About the Role

An engineering role at Cloudflare provides an opportunity to address some big challenges, at scale.  We believe that with our talented team, we can solve some of the biggest security, reliability and performance problems facing the Internet. Just how big?  

  • We have in excess of 15 Terabits of network transit capacity
  • We operate 300+ Points-of-presence around the world
  • We serve more traffic than Twitter, Amazon, Apple, Instagram, Bing, & Wikipedia combined
  • Anytime we push code, it immediately affects over 200 million internet users
  • Every day, up to 20,000 new customers sign-up for Cloudflare service
  • Every week, the average Internet user touches us more than 500 times

We are looking for talented Software Engineers to build and develop the platform that earns the trust our customers place in Cloudflare.  Our Software Engineers come from a variety of technical backgrounds and have built up their knowledge working in different environments. But the common factors across all of our reliability-focused engineers include a passion for automation, scalability, and operational excellence.  Our Infrastructure Software Systems and Automation team focuses on the automation needed to scale our infrastructure.

Our team is well-funded and focused on building an extraordinary company.  This is a superb opportunity to join a high-performing team and scale our high-growth network as Cloudflare’s business grows.  You will build tools to constantly improve our scale and speed of deployment.  You will nurture a passion for an “automate everything” approach that makes systems failure-resistant and ready-to-scale.   

Cloudflare Software Engineers focus on automating our infrastructure installations and decommissions at scale.  We enable our Data Centre Engineering teams by allowing them to install new data centers, replace servers and networking in existing data centers as quickly and efficiently as possible while not impacting existing infrastructure and customer services.  While our focus is on automation and accurate asset tracking, there is an element of ongoing operational support of Data Center Engineers and other teams.  We also review upcoming hardware changes and update automation and configuration management to cater to these advances.

Many of our Software Engineers have had the opportunity to work at multiple offices on interim and long-term project assignments. The ideal Software Engineering candidate has strong knowledge of Python and Golang, with Rust an advantage. As we are automating server and networking installations, knowledge of Linux, hardware and networking is ideal.  We prefer to hire experienced candidates; however, raw skill trumps experience, and we welcome strong junior applicants.

Requisite Skills

  • Intermediate level software development skills in Python and Shell scripting
  • 5 years of relevant Development experience
  • Strong skills in network services, including Rest APIs and HTTP

Examples of desirable skills, knowledge and experience

  • 5 years of relevant work experience
  • Linux systems administration experience
  • Experience with Kubernetes and docker
  • Tooling and automation development experience
  • Network fundamentals: DHCP, ARP, subnetting, routing, firewalls, IPv6
  • Configuration management systems such as Saltstack, Chef, Puppet or Ansible
  • SQL databases (Postgres or MySQL)
  • Time series databases (OpenTSDB, Graphite, Prometheus)
  • The ability to understand service and device metrics and visualize them using Grafana

Bonus Points

  • Experience programming in Rust, Go or with Django
  • Experience with continuous / rapid release engineering
  • Experience developing systems that are highly available and redundant across regions
  • Performance analysis and debugging with tools like perf, sar, strace, dtrace
  • Experience with the Linux kernel and Linux software packaging
  • Internetworking and BGP experience
  • Key/Value stores (Redis, KyotoTycoon, Cassandra, LevelDB)
  • Load balancing and reverse proxies such as Nginx, Varnish, HAProxy, Apache

Some tools that we use

  • Netbox
  • Apache Airflow 
  • Salt
  • Docker, Kubernetes
  • Nginx
  • Python
  • Django
  • PostgreSQL
  • Redis
  • Prometheus

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, using technology already deployed for Cloudflare's enterprise customers, at no cost.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal - we don’t store client IP addresses. Never, ever. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer.  We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness.  All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities.  Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment.  If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.

See more jobs at Cloudflare

Apply for this job

16d

Lead Data Scientist

Dynatrace - Barcelona, Spain, Remote
S3, EC2, Lambda, airflow, sql, Design, python, AWS

Dynatrace is hiring a Remote Lead Data Scientist

Job Description

Our Business Insights team is seeking a Lead Data Scientist to drive impactful data-driven decision-making across our products and operations. In this role, you’ll leverage your advanced expertise to design and develop machine learning solutions, uncover the causal relationships in complex systems, and help our customers better understand how their clients interact with their applications. This position requires someone comfortable with independently managing challenging projects and guiding others through technical decision-making.

Key Responsibilities and Impact

  • Explore and analyse millions of rows of tabular data to uncover meaningful insights and build advanced machine-learning models.
  • Design and implement causal analysis models to assess the impact of system performance on user experience, providing clear, actionable insights into customer behaviour.
  • Develop and deploy machine learning models and workflows, transforming terabytes of traffic data into actionable insights that drive key business decisions.
  • Lead the development and deployment of models, ensuring robustness, scalability, and reliability in production environments.
  • Build automated solutions for business needs, such as bot detection using advanced machine learning and statistical methods.
  • Collaborate closely with product owners, engineers, and other stakeholders to translate analytical findings into impactful features and product improvements.
  • Take ownership of technical direction, contribute to architectural decisions, identify technical debt, and advocate for opportunities for improvement.
  • Mentor and support junior data scientists, enhancing team productivity, improving code quality, and fostering a culture of collaboration and learning.
  • Continuously monitor and enhance model performance in partnership with the engineering team, improving model impact on user experience and system effectiveness.

 

 

Qualifications

Minimum requirements: 

  • A degree in Engineering, Computer Science, Mathematics, or another quantitative field.
  • 10+ years of demonstrable/tenured experience in data/data science, including at least 3 years in a lead or senior IC role.
  • Expertise in causal analysis methods (e.g., propensity score matching, A/B testing, uplift modeling) with a demonstrated ability to analyse tabular data.
  • Strong experience in Python (including Pandas, NumPy, and Scikit-Learn) for data processing and machine learning model construction.
  • Proficiency in SQL, with the ability to write complex queries and optimise data retrieval from relational databases.
  • Strong communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders.
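As a concrete illustration of the A/B-testing side of the causal-analysis requirement above, here is a minimal two-proportion z-test sketched in plain Python. The conversion counts are invented for the example, and a production analysis would typically use a statistics library rather than hand-rolled math:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    Returns (z, two-sided p-value). Uses the pooled proportion under
    the null hypothesis that both variants convert at the same rate.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 200/5000 conversions vs 260/5000
z, p = ab_test_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
```

Here the lift from 4.0% to 5.2% is significant at the usual 5% level; the same pooled-variance logic underlies most A/B significance calculators.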

Desirable requirements:

  • Experience with large language models (LLMs) and autonomous agents, with an understanding of their practical applications and limitations.
  • Familiarity with big data technologies such as Spark or Snowpark for processing and analysing large datasets efficiently.
  • Hands-on experience working with Snowflake, particularly using Snowpark for scalable data engineering and machine learning workflows.
  • Experience with AWS services (e.g., S3, Lambda, EC2) for managing machine learning infrastructure and deploying models in a cloud-native environment.
  • Hands-on experience with data visualisation tools like Plotly, Seaborn, or other Python-based libraries to convey data insights effectively.
  • Familiarity with data pipeline orchestration tools (e.g., Airflow, Luigi) to manage ETL/ELT workflows.
  • Ability to operate in a fast-paced, dynamic environment, effectively prioritising multiple projects with competing deadlines.

See more jobs at Dynatrace

Apply for this job

16d

AI Operations Specialist

Unit4 - Lisbon, Portugal, Remote
ML, Master’s Degree, airflow, sql, azure, python

Unit4 is hiring a Remote AI Operations Specialist

Job Description

We are looking for a highly motivated AI Operations Specialist to join our team. The ideal candidate will have expertise in managing AI and machine learning models on Snowflake. In this standalone role, you will take ownership of the operational lifecycle of AI/ML models deployed within Snowflake, ensuring their reliability, scalability, and performance. You will bridge the gap between development and production, applying advanced MLOps practices tailored to Snowflake’s data ecosystem to deliver seamless AI-powered insights to the business.

You will act as the primary point of accountability for AI operations, bridging the gap between development and production environments. Your work will directly impact the efficiency and effectiveness of AI solutions, empowering business teams to make data-driven decisions with confidence.

Key Responsibilities

1. Model Deployment and Management

  • Build and maintain deployment pipelines for AI/ML models and ensure seamless transition from development to production.
  • Collaborate with data scientists and engineers to ensure models are properly versioned, tested, and deployed.
  • Implement monitoring tools to track performance metrics like accuracy, latency, and resource utilization.
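The monitoring responsibility above (tracking accuracy, latency, and resource metrics) can be pictured as a small batch check like the following sketch. The SLO thresholds, field names, and sample numbers are illustrative assumptions, not Unit4's actual tooling:

```python
def monitor_batch(latencies_ms, predictions, labels,
                  p95_budget_ms=250.0, min_accuracy=0.9):
    """Summarize one batch of inference results against simple SLOs.

    Returns a dict of metrics plus a list of triggered alerts. The
    thresholds are illustrative placeholders, not real product SLOs.
    """
    ordered = sorted(latencies_ms)
    # Nearest-rank p95: value at the 95th-percentile position
    p95 = ordered[max(0, int(round(0.95 * len(ordered))) - 1)]
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    alerts = []
    if p95 > p95_budget_ms:
        alerts.append(f"latency p95 {p95:.0f}ms exceeds {p95_budget_ms:.0f}ms budget")
    if accuracy < min_accuracy:
        alerts.append(f"accuracy {accuracy:.2%} below {min_accuracy:.0%} floor")
    return {"p95_ms": p95, "accuracy": accuracy, "alerts": alerts}

# One slow request and one misprediction trip both alerts
report = monitor_batch(
    latencies_ms=[12, 18, 25, 40, 300],
    predictions=[1, 0, 1, 1, 0],
    labels=[1, 0, 1, 0, 0],
)
```

In practice these metrics would feed the dashboards mentioned below rather than be computed ad hoc, but the alerting logic is the same shape.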

2. System Monitoring and Incident Management

  • Monitor AI systems in real time to detect anomalies, failures, or performance degradation.
  • Respond to incidents to minimize downtime and business impact.
  • Develop and maintain dashboards for key AI system performance metrics.

3. Performance Optimization

  • Analyze and improve model inference times and operational efficiency.
  • Identify bottlenecks in data pipelines and recommend solutions to optimize throughput.
  • Proactively manage cloud resources to balance cost and performance.

4. Data Management

  • Collaborate with data engineering teams to ensure the availability of high-quality data for AI systems.
  • Implement processes for automated data validation, anomaly detection, and error correction.
  • Manage data lineage and compliance requirements for AI-related workflows.

5. Continuous Improvement

  • Apply best practices to enhance AI model lifecycle management.
  • Stay updated on emerging technologies and tools for AI operations.
  • Provide feedback to improve model training, testing, and deployment workflows.

Qualifications

Must-Have Skills

  • Snowflake Expertise:
    • Hands-on experience with Snowflake, including Snowpark, virtual warehouses, and query performance optimization.
    • Proficiency in Snowflake-native ML and integration with external AI tools.
  • AI/ML Knowledge:
    • Strong foundation in AI/ML principles, with practical experience in deploying and monitoring models in production.
    • Experience working with Python and SQL, especially in Snowflake environments.
  • Operational Skills:
    • Familiarity with CI/CD pipelines tailored for AI/ML workflows.
    • Knowledge of data validation and monitoring tools for maintaining data integrity.
  • Soft Skills:
    • Self-starter with the ability to work independently and manage end-to-end responsibilities.
    • Strong analytical and problem-solving skills.
    • Clear communicator who can collaborate across teams.

Nice-to-Have Skills

  • Experience with cloud platforms (Azure) and their integration with Snowflake.
  • Certifications in Snowflake, AI/ML, or cloud platforms.
  • Familiarity with additional tools like dbt, MLFlow, or Airflow for enhanced Snowflake workflows.

Experience Requirements

  • Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
  • 3+ years of experience in managing AI/ML models, with at least 1 year of specific experience on Snowflake.
  • Proven ability to manage production-grade AI models within a Snowflake environment.

See more jobs at Unit4

Apply for this job

18d

ML Engineer

ML, S3, Lambda, 4 years of experience, Bachelor's degree, airflow, Design, api, kubernetes, python, AWS

FuseMachines is hiring a Remote ML Engineer

ML Engineer - Fusemachines - Career Page

See more jobs at FuseMachines

Apply for this job

18d

(Senior) Data Engineer - France

Shippeo - Paris, France, Remote
ML, airflow, sql, RabbitMQ, docker, kubernetes, python

Shippeo is hiring a Remote (Senior) Data Engineer - France

Job Description

The Data Intelligence Tribe is responsible for leveraging Shippeo’s data from our large shipper and carrier base to build data products and ML models that provide predictive insights. These typically help our users (shippers and carriers alike) to:

  • get accurately alerted in advance of any potential delays on their multimodal flows or anomalies so that they can proactively anticipate any resulting disruptions

  • extract the data they need, get direct access to it or analyze it directly on the platform to gain actionable insights that can help them increase their operational performance and the quality and compliance of their tracking

  • provide best-in-class data quality by implementing advanced cleansing & enhancement rules

As a Data Engineer at Shippeo, your objective is to ensure that data is available and usable by our Data Scientists and Analysts across our data platforms. You will contribute to the construction and maintenance of Shippeo’s modern data stack, which is composed of different technology blocks:

  • Data Acquisition (Kafka, KafkaConnect, RabbitMQ),

  • Batch data transformation (Airflow, DBT),

  • Cloud Data Warehousing (Snowflake, BigQuery),

  • Stream/event data processing (Python, Docker, Kubernetes), plus all the underlying infrastructure that supports these use cases.
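As a rough illustration of the batch-transformation block above, here is a framework-free Python sketch of the extract, transform, load ordering that an orchestrator like Airflow manages. The task names and payloads are invented for the example, not Shippeo's actual pipeline:

```python
# A minimal, framework-free sketch of the extract -> transform -> load
# dependency ordering that a tool like Airflow schedules for batch jobs.

def extract():
    # Stand-in for pulling raw tracking events from a message queue
    return [{"shipment": "A", "delay_min": 0}, {"shipment": "B", "delay_min": 45}]

def transform(rows):
    # Flag shipments likely to disrupt downstream operations
    return [dict(r, delayed=r["delay_min"] > 30) for r in rows]

def load(rows):
    # Stand-in for writing the result into a warehouse table
    return {r["shipment"]: r["delayed"] for r in rows}

# Run the tasks in topological order, as an orchestrator would
warehouse = load(transform(extract()))
```

An orchestrator adds scheduling, retries, and backfills on top of this ordering; in real Airflow each function would become a task and the chain would be declared as explicit dependencies.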

 

Qualifications

Required:

  • You have a degree (MSc or equivalent) in Computer Science.

  • 3+ years of experience as a Data Engineer.

  • Experience building, maintaining, testing and optimizing data pipelines and architectures

  • Programming skills in Python 

  • Advanced working knowledge of SQL, experience working with relational databases and familiarity with a variety of databases.

  • Working knowledge of message queuing and stream processing.

  • Advanced knowledge of Docker and Kubernetes.

  • Advanced knowledge of a cloud platform (preferably GCP).

  • Advanced knowledge of a cloud based data warehouse solution (preferably Snowflake).

  • Experience with Infrastructure as code (Terraform/Terragrunt)

  • Experience building and evolving CI/CD pipelines (Github Actions).

Desired: 

  • Experience with Kafka and KafkaConnect (Debezium).

  • Monitoring and alerting on Grafana / Prometheus.

  • Experience working on Apache Nifi.

  • Experience working with workflow management systems such as Airflow.

See more jobs at Shippeo

Apply for this job

19d

Web Analytics & Data Tracking Manager

Nile Bits - Cairo, Egypt, Remote
tableau, airflow, sql, salesforce, Firebase, mobile, qa, docker, python

Nile Bits is hiring a Remote Web Analytics & Data Tracking Manager

Job Description

Key Responsibilities

  • Take ownership of all tag implementations in GTM and server-side GTM to feed data to tools and partners such as Snowplow, Google Analytics, Firebase, Criteo, and Epsilon.
  • Working closely with Marketing teams to ensure efficient and well structured tracking code
  • Devising and owning new tracking specifications to be implemented
  • Managing all project correspondence with stakeholders
  • Experience of managing tracking implementation projects
  • Set the direction of our digital analytics strategy and enforce best practices
  • Audit the existing client-side/server-side data collection setup, identifying gaps in tracking and processes, identify inefficiencies and opportunities to improve the richness and quality of data collection at every step of the process
  • Responsible for the end-to-end delivery of tracking projects; this encompasses data capture, testing/validating results, and surfacing data in the data warehouse
  • Maintaining and creating documentation of tracking and processes
  • Maintaining our tracking architecture to ensure we follow best practices and reduce tech debt
  • Set up tracking monitoring processes to ensure we minimize downtime and preserve high quality data
  • Administration and maintenance of various tracking-related tools including, but not limited to, Snowplow, GA4, GTM, OneTrust
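One way to picture the tracking-monitoring responsibility above is a minimal event-payload schema check run before events reach the warehouse. The field names below are hypothetical, not a real GTM or Snowplow schema:

```python
# Illustrative only: a tiny schema check of the kind a tracking-QA
# process might automate over collected events. Field names are
# hypothetical, not an actual GTM/Snowplow event schema.

REQUIRED_FIELDS = {"event_name": str, "page_url": str, "timestamp_ms": int}

def validate_event(event):
    """Return a list of problems found in one tracking event payload."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            problems.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            problems.append(f"bad type for {field}: {type(event[field]).__name__}")
    return problems

good = {"event_name": "page_view", "page_url": "/home", "timestamp_ms": 1700000000000}
bad = {"event_name": "page_view", "timestamp_ms": "not-a-number"}

assert validate_event(good) == []
issues = validate_event(bad)
```

Real deployments usually express these checks as versioned schemas (Snowplow does this natively), but the pass/fail gate is the same idea.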

 

The Deal Breakers

  • Expert technical knowledge of Google Tag Manager and its ecosystem
  • Proven experience setting up and managing complex, large-scale implementations across web and mobile
  • Experience implementing or working with clickstream data
  • Some experience with SQL
  • Comfortable with exploring large datasets, with an emphasis on event data to ensure our tracking is meeting downstream requirements
  • Good understanding of the flow of data from data collection to reporting and insights and the impacts tracking can have on business processes
  • Highly competent in translating and presenting complex technical information to a less informed audience

Qualifications

And you are…

  • A doer! Willing to step outside your comfort zone to broaden your skills and learn new technologies
  • Meticulous when it comes to devising processes, documentation and QA work
  • Proactive and highly organized, with strong time management and planning skills
  • Approachable personality, happy to help resolve ad-hoc unscheduled problems
  • Proactive, self-starter mindset; identifying elegant solutions to difficult problems and being able to suggest new and creative approaches
  • Great time management skills with the ability to identify priorities

 

Nice to have

  • Experience working with Snowplow or other event-level analytics platform is a big plus
  • Experience setting up Server Side Google Tag Manager to reduce page load times
  • Exposure to cloud based data warehousing and modelling
  • Experience setting up analytics integrations with AB testing platforms (we use Optimizely)
  • Knowledge or experience of server-side tracking implementation
  • An engineering mindset looking to leverage modern tools and technologies to drive efficiencies
  • Exposure to Python/R or similar procedural programming language

 

Our data stack

We collect data from dozens of data sources, ranging from transactional data, availability data, payments data, customer event-level data, voice-of-customer data, third-party data and much, much more. Our historical data runs into tens of billions of records and grows at a rate of tens of millions of records every day. Our data is extremely varied: some is very finely-grained, event-level data, while other data arrives already aggregated to various degrees. It also arrives on different schedules!

Our tracking infrastructure contains tools such as GTM, SS GTM, Snowplow, GA4.

Our data stack is Python for the data pipeline, Airflow for orchestration and Snowflake is our data warehousing technology of choice. On top of our warehouse we have Tableau to assist with standardized reporting and self service, there is also a Tableau embedding within Salesforce.

Our wider ecosystem of tools and partners includes Iterable, Docker, Branch, GA4, Salesforce, Tableau. Everything runs in AWS.

Our team culture

The data platform team is an enthusiastic group who are passionate about our profession. We continuously maintain our team culture via things like retrospective meetings, weekly socials, an open-door mentality and cross-profession knowledge sharing. We adopt a fail-fast mentality that promotes a safe environment for our team to upskill comfortably. Our team make-up reflects the company ethos of inclusion and diversity; we are made up of a collection of different people/genders/backgrounds and celebrate our differences. Ultimately we are a team and we work together as one: no individual is solely responsible for any area of our pipeline, and our successes and failures are shared.

 

Apply Now

See more jobs at Nile Bits

Apply for this job

19d

Senior Data Engineer

Increasingly - Bengaluru, India, Remote
airflow, scrum

Increasingly is hiring a Remote Senior Data Engineer

Job Description

Are you ready to embark on a thrilling data adventure? Buckle up, because we're about to take you on a wild ride through the world of big data!

  • Become the master of NiFi flows! You'll be creating and updating pipelines like a digital wizard, managing a colossal NiFi environment that'll make your head spin (in a good way, of course).
  • Tame the Hive and wrangle Impala in our Cloudera Data Platform jungle. Your table management skills will be put to the test in this data safari!
  • Join our band of merry architects as we build the future of data processing. You'll be jamming with NiFi, Kafka, and other cool cats in our tech orchestra.
  • Put on your detective hat and dive into data analysis. You'll be uncovering insights faster than you can say "Eureka!"
  • Become a Scrum superhero! You'll leap tall backlogs in a single bound and sprint through ceremonies with the agility of a data ninja.
  • Channel your inner polyglot and become the ultimate translator between BAs and bits. Your ability to speak both business and data will make you the life of the tech party!

Get ready to have a blast while pushing the boundaries of data engineering. It's not just a job - it's a data-driven adventure!

Qualifications

  • Experience with cloud technologies as a Data Engineer
  • Technical expertise with Big Data tools and methodologies
  • Experience working with Apache Kafka
  • Experience working with Hive and Impala
  • Skills working with Airflow
  • Experience working with Hadoop ecosystem (CDP/Cloudera Data Platform)
  • Hands-on experience with Data Ingestion
  • Proficiency in SQL/PostgreSQL

See more jobs at Increasingly

Apply for this job

24d

Senior Analytics Engineer

Handshake - San Francisco, CA (hybrid)
airflow, sql, Design, c++, docker

Handshake is hiring a Remote Senior Analytics Engineer

Everyone is welcome at Handshake. We know diverse teams build better products and we are committed to creating an inclusive culture built on a foundation of respect for all individuals. We strongly encourage candidates from non-traditional backgrounds, historically marginalized or underrepresented groups to apply.

Want to learn more about what it's like to work at Handshake?Check out these interviews from our team members!

Your Impact

At Handshake, data is used in decision making across all areas of our business. As a Senior Analytics Engineer, you will play a pivotal role in shaping our data ecosystem by implementing scalable data models. Your expertise will empower our business stakeholders to make data-driven decisions, enabling the organization to leverage data as a strategic asset. Your contributions will enhance our ability to translate complex data needs into actionable metrics and models, driving innovation and efficiency across the organization. Your technical expertise will be instrumental in helping millions of students discover meaningful careers, irrespective of their educational background, network, or financial resources.

Your Role

In this role, you will:

  • Design and implement scalable data models and their corresponding pipelines.

  • Collaborate closely with product and business teams to translate their data requirements into effective data models and metrics.

  • Develop and own the data ecosystem, ensuring robust data governance and high data quality.

  • Act as a leader in the data space, driving best practices and enabling data driven decisions across the product org.

Your Experience

To excel in this role, you should possess:

  • Advanced SQL skills: Strong expertise in SQL and experience with data modeling and database design conventions.

  • Proven leadership in data modeling or data governance, contributing to successful data strategy implementations.

  • Demonstrated expertise in data modeling at scale.

  • History of collaboration with product and business teams to align data initiatives with organizational goals.

  • Experience using data orchestration and data modeling tools (Airflow and dbt are the tools we use at Handshake)

  • Cloud platform proficiency: Hands-on experience with cloud-based data technologies, preferably Google Cloud Platform (GCP), including BigQuery, DataFlow, BigTable, and more

Bonus Areas of Expertise

While not required, expertise in any of the following areas would be highly advantageous:

  • Experience with Apache Spark for large-scale data processing and knowledge of when to utilize big data tooling.

  • BI tool knowledge: Specifically Looker and Hex

  • Containerization and orchestration: Familiarity with containerization technologies like Docker and container orchestration platforms like Kubernetes.

Compensation range

$192,222-$213,580

For cash compensation, we set standard ranges for all U.S.-based roles based on function, level, and geographic location, benchmarked against similar stage growth companies. In order to be compliant with local legislation, as well as to provide greater transparency to candidates, we share salary ranges on all job postings regardless of desired hiring location. Final offer amounts are determined by multiple factors, including geographic location as well as candidate experience and expertise, and may vary from the amounts listed above.

About us

Handshake is the career platform for Gen Z. With a community of over 17 million students, alumni, employers, and career educators, Handshake’s network is where career advice and discovery turn into first, second, and third jobs. Nearly 1 million companies use Handshake to build their future workforce—from Fortune 500 to federal agencies, school districts to startups, healthcare systems to small businesses. Handshake is built for where you’re going, not where you’ve been.

When it comes to our workforce strategy, we’ve thought deeply about how work-life should look at Handshake. With our hybrid-work model, employees benefit from collaboration and shared team experiences three days per week in our vibrant offices, and enjoy the flexibility of remote work two days per week. Handshake is headquartered in San Francisco, with offices in New York, London, and Berlin.

What we offer

At Handshake, we'll give you the tools to feel healthy, happy and secure.

Benefits below apply to employees in full-time positions.

  • Equity and ownership in a fast-growing company.
  • 16 weeks of paid parental leave for birth giving parents & 10 weeks of paid parental leave for non-birth giving parents.
  • Comprehensive medical, dental, and vision policies including LGBTQ+ coverage. We also provide resources for Mental Health Assistance, Employee Assistance Programs and counseling support.
  • Handshake offers a $500/£360 home office stipend for you to spend during your first 3 months to create a productive and comfortable workspace at home.
  • Generous learning & development opportunities and an annual $2,000/£1,500/€1,850 stipend for you to grow your skills and career.
  • Financial coaching through Origin to help you through your financial journey.
  • Monthly internet stipend and a brand new MacBook to allow you to do your best work.
  • Monthly commuter stipend for you to expense your travel to the office (for office-based employees).
  • Free lunch provided twice a week across all offices.
  • Referral bonus to reward you when you bring great talent to Handshake.

(US-specific benefits, in addition to the first section)

  • 401k Match: Handshake offers a dollar-for-dollar match on 1% of deferred salary, up to a maximum of $1,200 per year.
  • All full-time US-based Handshakers are eligible for our flexible time off policy to get out and see the world. In addition, we offer 8 standardized holidays, and 2 additional days of flexible holiday time off. Lastly, we have a Winter #ShakeBreak, a one-week period of Collective Time Off.
  • Family support: We partner with Milk Stork to provide comprehensive 100% employer-sponsored lactation support to traveling parents and guardians. Parental leave coaching and support provided by Parentaly.

(UK-specific benefits, in addition to the first section) 

  • Pension Scheme: Handshake will provide you with a workplace pension, where you will make contributions based on 5% of your salary. Handshake will pay the equivalent of 3% towards your pension plan, subject to qualifying earnings limits.
  • Up to 25 days of vacation to encourage people to reset, recharge, and refresh, in addition to 8 bank holidays throughout the year.
  • Regular offsites each year to bring the team together + opportunity to travel to our HQ in San Francisco.
  • Discounts across various high street retailers, cinemas and other social activities exclusively for Handshake UK employees.

(Germany-specific benefits, in addition to the first section)

  • 25 days of annual leave + 5 days of a winter #ShakeBreak, a one-week period of Collective Time Off across the company.
  • Regular offsites each year to bring the team together + opportunity to travel to our HQ in San Francisco once a year.
  • Urban Sports Club membership offering access to a diverse network of fitness and wellness facilities.
  • Discounts across various high street retailers, cinemas and other social activities exclusively for Handshake Germany employees.

Looking for more? Explore our mission, values and comprehensive US benefits at joinhandshake.com/careers.

Handshake is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or reasonable accommodation, please reach out to us at people-hr@joinhandshake.com.


Senior Data Engineer

Zego, London Area, England, United Kingdom, Remote Hybrid
ML, terraform, airflow, sql, Design, docker, postgresql, kubernetes, python, AWS, backend

Zego is hiring a Remote Senior Data Engineer

About Zego

At Zego, we understand that traditional motor insurance holds good drivers back. It's too complicated, too expensive, and it doesn't reflect how well you actually drive. Since 2016, we have been on a mission to change that by offering the lowest priced insurance for good drivers.

From van drivers and gig workers to everyday car drivers, our customers are the driving force behind everything we do. We've sold tens of millions of policies and raised over $200 million in funding. And we’re only just getting started.

Overview of the Data Engineering team: 

At Zego the Data Engineering team is integral to our data platform, working closely with Software Engineers, Data Scientists and Data Analysts along with other areas of the business. We use a variety of internal and external tooling to maintain our data repositories. We are looking for people who have a solid understanding of ETL and ELT paradigms, are comfortable using Python and SQL, appreciate good software engineering and data infrastructure principles, are eager to work with complex and fast-growing datasets, have a strong desire to learn, and communicate well.

Our stack involves, but is not limited to, Airflow, Data Build Tool (dbt), a multitude of AWS services, Stitch and CI/CD pipelines. As a Data Engineer you will have the opportunity to champion emerging technologies where they can add value to the business and promote better ways of working.
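To give a flavour of the work: pipelines like these are typically built from small extract/transform/load steps that a scheduler such as Airflow then wires together as tasks. The sketch below is a minimal, plain-Python illustration of that shape; the field names and the insurance-flavoured data are purely hypothetical, not Zego's actual schema.

```python
# Hypothetical ELT step: each function would become one task in an
# orchestrated pipeline (e.g. an Airflow DAG). Names are illustrative only.

def extract(raw_rows):
    """Keep only raw rows that carry a usable policy identifier."""
    return [r for r in raw_rows if r.get("policy_id") is not None]

def transform(rows):
    """Normalise premiums from pounds to pence and stamp a schema version."""
    return [
        {"policy_id": r["policy_id"],
         "premium_pence": round(r["premium_gbp"] * 100),
         "schema_version": 1}
        for r in rows
    ]

def load(rows, warehouse):
    """Append transformed rows to an in-memory stand-in for a warehouse table."""
    warehouse.setdefault("fact_policy", []).extend(rows)
    return len(rows)

raw = [{"policy_id": 1, "premium_gbp": 12.5}, {"policy_id": None}]
warehouse = {}
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 1 row survives extraction and is loaded
```

Keeping each step a pure function like this makes the pipeline easy to unit-test before it is handed to the orchestrator.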

It is an exciting time to join, and you’ll partner with world class engineers, analysts and product managers to help make Zego the best loved insurtech in the world.

Over the next 12 months you will:

  • Assist in developing and maintaining our ETL and ELT pipelines.
  • Support our data scientists in the development and implementation of our ML pricing models and experiments.
  • Help drive and evolve the architecture of our data ecosystem.
  • Collaborate with product managers and across teams to bring new products and features to the market.
  • Help drive data as a product, by growing our data platform with a focus on strong data modelling, quality, usage and efficiency.
  • Build tailored data replication pipelines as our backend application is decomposed into microservices.

About you

We are looking for somebody with a working knowledge of building data pipelines and the underlying infrastructure. Experience designing data warehouses, following best practices during implementation, is a big plus. You have worked with (or are keen to work with) Data Analysts, Data Scientists and Software Engineers.

Practical knowledge of (or strong desire to learn) the following or similar technologies:

  • Python
  • Airflow
  • Databases (PostgreSQL)
  • Data Warehousing (Redshift / Snowflake)
  • SQL (We use DBT for modelling data in the warehouse)
  • Data Architecture including Dimensional Modelling
  • Experience in using infrastructure as code tools (e.g. Terraform)
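Of the skills above, dimensional modelling is the most concept-heavy: facts (measurable events) live in one table and descriptive attributes live in dimension tables keyed off it. Here is a toy star schema using only the standard library's sqlite3; the table and column names are invented for illustration.

```python
import sqlite3

# Toy star schema: one fact table joined to one dimension table.
# All names (dim_driver, fact_policy, ...) are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_driver (driver_key INTEGER PRIMARY KEY,
                             name TEXT, segment TEXT);
    CREATE TABLE fact_policy (policy_id INTEGER,
                              driver_key INTEGER, premium_pence INTEGER);
""")
con.executemany("INSERT INTO dim_driver VALUES (?, ?, ?)",
                [(1, "Ada", "courier"), (2, "Sam", "private")])
con.executemany("INSERT INTO fact_policy VALUES (?, ?, ?)",
                [(100, 1, 1250), (101, 1, 2000), (102, 2, 900)])

# The classic dimensional query: aggregate facts by a dimension attribute.
rows = con.execute("""
    SELECT d.segment, SUM(f.premium_pence)
    FROM fact_policy f JOIN dim_driver d USING (driver_key)
    GROUP BY d.segment ORDER BY d.segment
""").fetchall()
print(rows)  # [('courier', 3250), ('private', 900)]
```

The same slice-by-dimension pattern is what dbt models and warehouse reporting layers are ultimately built on.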

Otherwise an interest in learning these, with the support of the team, is essential. We're looking for people with a commitment to building, nurturing, and iterating on an ever-evolving data ecosystem.

Other beneficial skills include:

  • Familiarity with Docker and/or Kubernetes (EKS)
  • Implementation / Contribution to building a Data Lake or Data Mesh
  • Having worked with a wide variety of AWS services
  • Open Table Formats (e.g. Apache Iceberg)

How we work

We believe that teams work better when they have time to collaborate and space to get things done. We call it Zego Hybrid.

Our hybrid way of working is unique. We don't mandate fixed office days. Instead, we foster a flexible approach that empowers every Zegon to perform at their best. We ask you to spend at least one day a week in our central London office. You have the flexibility to choose the day that works best for you and your team. We cover the costs for all company-wide events (3 per year), and also provide a separate hybrid contribution to help pay towards other travel costs. We think it’s a good mix of collaborative face time and flexible home-working, setting us up to achieve the right balance between work and life.

Benefits

We reward our people well. Join us and you’ll get a market-competitive salary, private medical insurance, company share options, generous holiday allowance, and a whole lot of wellbeing benefits. And that’s just for starters.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, or disability status.


Senior Analytics Engineer

Up Learn, London, England, United Kingdom, Remote
airflow, python

Up Learn is hiring a Remote Senior Analytics Engineer

Are you looking for a way to reinvent the way the world learns? Do you want to establish best practices in analytics on a modern data stack? Are you excited about being a key part of a growing data team? Up Learn may be the right place for you. You will be helping to lay Up Learn’s foundations for scale and contributing to a data practice that is helping to tackle one of society’s most meaningful problems: education.

About us

Up Learn has built the world’s most effective learning experience. We’ve done this by combining cognitive science, instructional theory and artificial intelligence.

Our mission is to create the most effective learning experiences in the world, and distribute access to as many students as possible.

Up Learn started with A Levels and developed courses that are:

  1. Effective: 97% of students that complete Up Learn courses achieve an A*/A, starting from grades as low as Ds and Es
  2. Engaging: 23.5 million hours of learning thanks to Up Learn, and rising
  3. Scaling: tens of thousands of students use Up Learn today, either independently, or through one of our 400 schools, university or charity partners

Up Learn has been growing fast, and is backed by investors that share its vision, including leading venture capital firm Forward Partners and the Branson family (Virgin). Social impact is critical to Up Learn’s mission - for every student that pays, Up Learn gives a full scholarship to a student who can’t. We are growing our incredible, 40+ strong team.

Our data stack

Our data stack is built on the following tools, with a particular emphasis on leveraging open-source technologies:

  • Google Cloud Platform for all of our analytics infrastructure
  • dbt and BigQuery for our data modelling and warehousing
  • Python and Streamlit for data science and analysis
  • gitlab for version control and CI/CD
  • Lightdash for BI/dashboards
  • Airflow for orchestration

You should have:

  • Excellent SQL knowledge with strong hands-on data modelling and data warehousing skills
  • Strong attention to detail in order to highlight and address data quality issues
  • Experience in designing and managing data tools and infrastructure
  • Great time management and proactive problem-solving abilities in order to meet deadlines
  • Strong communication and data presentation skills through the use of effective data visualisation and BI tools, e.g. Looker, Tableau, Power BI, etc

You should be:

  • Self-motivated, responsible and technology-driven individual who performs well both independently and as a team member
  • Excited about learning - enthusiastic to learn something new, and then apply it
  • Effective at building strong, influential relationships with colleagues and partners, with demonstrated success in delivering impactful analytics to stakeholders

Bonus points for:

  • Having used dbt in a business environment
  • Demonstrable track record in mentorship and educating non-technical stakeholders
  • Exposure to Python for data manipulation and analysis

What we offer

Up Learn offers generous remuneration, equity share options, and a fun, friendly, high-calibre team that trusts you and gives you the freedom to be brilliant.

You will have the chance to define the future of education and make a meaningful contribution to the lives of thousands of students, and:

Remuneration

  • A competitive salary
  • Employer-matched pension
  • Perks scheme offering discounts & rewards at 30,000+ brands including up to 55% off cinema tickets

Health & Wellbeing

  • Level 6 (highest level) dental insurance
  • Significantly enhanced maternity and paternity leave
  • Cycle-to-Work: we are registered so you can buy a bike and accessories tax-free
  • Eye test & glasses reimbursement
  • Company library: we have hundreds of books in our company library, topped up monthly with the most highly requested books. You can borrow a book whenever you like
  • Unlimited budget for any work-related books you need
  • Emergency support salary advance
  • Mental health first aiders
  • Family access to Up Learn: your family and close relatives get unlimited access to any Up Learn course for free!

Time

  • Minimum 35 days of paid holiday per year made up of: 26 days of bookable holiday, plus UK bank holidays, plus unlimited ‘extra days’ (i.e. if you need a few more days, no problem)
  • Ability to work remotely for longer periods
  • Flexible working hours
  • ⭐ 1 fully paid day for volunteering at a charity or not-for-profit of your choice each year

Social

  • Annual company off-site where we get out of the city and take a break together
  • Free sporting activities like 5-a-side football games, lunch-time jogs, badminton games, paid-for monthly CrossFit sessions
  • ☕ Unlimited delicious coffee (high-end coffee beans) at the office, tea selection and other soft drinks, plus unlimited snacks and fresh fruit
  • Weekly ‘Friday celebrations’ with a huge range of drinks, from craft beer to frozen margaritas, alongside soft drinks, smoothies, and fruit juice
  • ☕ Paid-for coffee breaks (a great chance to get to know the team)
  • Regular team outings like go-karting and skiing

All in addition to

  • Influence, trust and impact inside a well-funded VC-backed startup that's scaling
  • A spacious and bright private office in Old Street, with delicious coffee, a selection of teas and unlimited snacks and drinks

Our Core Values

  • Live for Learning - We are open-minded and have a never-quenched thirst for learning, expanding our experiences, getting feedback, iterating and improving
  • Strive for Consistent Excellence - We hold an extremely high standard, pay attention to the details and take pride in consistency
  • Objective and Rational - We think from first principles, avoid biases, use believability, regulate our emotions and are obligated to dissent when we disagree
  • Relentlessly Resourceful - We are honey badgers, we don’t compromise, we work smart and get the job done
  • Caring and Compassionate - We demonstrate care and compassion for ourselves, each other and for students

How to apply

If this sounds like it’s for you, we can’t wait to hear from you!

Use the Apply button below to send us your CV and tell us in 150 words or less why you’d be great for this role.

Inviting someone to join our team is a big deal for us and we put a lot of care and effort into the process, whilst making it take as little of your time as possible. If we figure out we’re not perfect for each other at any stage we’ll let you know quickly and make sure we provide you with feedback (if you want it!).


Senior Data Engineer

Braze, Remote - Ontario
Sales, Bachelor's degree, airflow, sql, Design, kubernetes

Braze is hiring a Remote Senior Data Engineer

At Braze, we have found our people. We’re a genuinely approachable, exceptionally kind, and intensely passionate crew.

We seek to ignite that passion by setting high standards, championing teamwork, and creating work-life harmony as we collectively navigate rapid growth on a global scale while striving for greater equity and opportunity – inside and outside our organization.

To flourish here, you must be prepared to set a high bar for yourself and those around you. There is always a way to contribute: Acting with autonomy, having accountability and being open to new perspectives are essential to our continued success. Our deep curiosity to learn and our eagerness to share diverse passions with others gives us balance and injects a one-of-a-kind vibrancy into our culture.

If you are driven to solve exhilarating challenges and have a bias toward action in the face of change, you will be empowered to make a real impact here, with a sharp and passionate team at your back. If Braze sounds like a place where you can thrive, we can’t wait to meet you.

WHAT YOU’LL DO

Join our dynamic team dedicated to revolutionizing data infrastructure and products for impactful decision-making at Braze. We collaboratively shape data engineering strategies, optimizing data pipelines and architecture to drive business growth and enhance customer experiences.

Responsibilities:

  • Lead the design, implementation, and monitoring of scalable data pipelines and architectures using tools like Snowflake and dbt
  • Develop and maintain robust ETL processes to ensure high-quality data ingestion, transformation, and storage
  • Collaborate closely with data scientists, analysts, and other engineers to design and implement data solutions that drive customer engagement and retention
  • Optimize and manage data flows and integrations across various platforms and applications
  • Ensure data quality, consistency, and governance by implementing best practices and monitoring systems
  • Work extensively with large-scale event-level data, aggregating and processing it to support business intelligence and analytics
  • Implement and maintain data products using advanced techniques and tools
  • Collaborate with cross-functional teams including engineering, product management, sales, marketing, and customer success to deliver valuable data solutions
  • Continuously evaluate and integrate new data technologies and tools to enhance our data infrastructure and capabilities

WHO YOU ARE

The ideal candidate for this role possesses:

  • 5+ years of hands-on experience in data engineering, cloud data warehouses, and ETL development, preferably in a customer-facing environment
  • Proven expertise in designing and optimizing data pipelines and architectures
  • Strong proficiency in advanced SQL and data modeling techniques
  • A track record of leading impactful data projects from conception to deployment
  • Effective collaboration skills with cross-functional teams and stakeholders
  • In-depth understanding of technical architecture and data flow in a cloud-based environment
  • Ability to mentor and guide junior team members on best practices for data engineering and development
  • Passion for building scalable data solutions that enhance customer experiences and drive business growth
  • Strong analytical and problem-solving skills, with a keen eye for detail and accuracy
  • Extensive experience working with and aggregating large event-level data
  • Familiarity with data governance principles and ensuring compliance with industry regulations
  • Experience with Kubernetes for container orchestration and Airflow for workflow management (preferred, but not required)


WHAT WE OFFER
 
Details of these benefit plans will be provided if a candidate receives an offer of employment. Benefits may vary by location; find out more here.
From offering comprehensive benefits to fostering flexible environments, we’ve got you covered so you can prioritize work-life harmony.
  • Competitive compensation that may include equity
  • Retirement and Employee Stock Purchase Plans
  • Flexible paid time off
  • Comprehensive benefit plans covering medical, dental, vision, life, and disability
  • Family services that include fertility benefits and equal paid parental leave
  • Professional development supported by formal career pathing, learning platforms, and tuition reimbursement
  • Community engagement opportunities throughout the year, including an annual company wide Volunteer Week
  • Employee Resource Groups that provide supportive communities within Braze
  • Collaborative, transparent, and fun culture recognized as a Great Place to Work®

ABOUT BRAZE

Braze is a leading customer engagement platform that powers lasting connections between consumers and brands they love. Braze allows any marketer to collect and take action on any amount of data from any source, so they can creatively engage with customers in real time, across channels from one platform. From cross-channel messaging and journey orchestration to AI-powered experimentation and optimization, Braze enables companies to build and maintain absolutely engaging relationships with their customers that foster growth and loyalty.

Braze is proudly certified as a Great Place to Work® in the U.S., the UK and Singapore. In 2024, we ranked #3 on Great Place to Work UK’s Best Workplaces (Large), #3 on Fortune Best Workplaces for Parents (Small and Medium), #13 on Great Place to Work UK’s Best Workplaces for Development (Large), #14 on Great Place to Work UK’s Best Workplaces for Wellbeing (Large), #14 on Fortune Best Workplaces in Technology (Small and Medium), #26 in Great Place to Work UK’s Best Workplaces for Women (Large), #31 in Fortune Best Workplaces (Medium), and #37 in Fortune Best Workplaces for Women.

We were also featured in the Top 10% of US News & World Best Companies to Work For, Top 100 Great Place to Work UK’s Best Workplaces in Europe (Medium), and in Built In’s Best Places to Work.

You’ll find many of us at headquarters in New York City or around the world in Austin, Berlin, Bucharest, Chicago, Dubai, Jakarta, London, Paris, San Francisco, Singapore, São Paulo, Seoul, Sydney and Tokyo – not to mention our employees in nearly 50 remote locations.

BRAZE IS AN EQUAL OPPORTUNITY EMPLOYER

At Braze, we strive to create equitable growth and opportunities inside and outside the organization.

Building meaningful connections is at the heart of everything we do, and that includes our recruiting practices. We're committed to offering all candidates a fair, accessible, and inclusive experience – regardless of age, color, disability, gender identity, marital status, maternity, national origin, pregnancy, race, religion, sex, sexual orientation, or status as a protected veteran. When applying and interviewing with Braze, we want you to feel comfortable showcasing what makes you you.

We know that sometimes different circumstances can lead talented people to hesitate to apply for a role unless they meet 100% of the criteria. If this sounds familiar, we encourage you to apply, as we’d love to meet you.

Please see our Candidate Privacy Policy for more information on how Braze processes your personal information during the recruitment process and, if applicable based on your location, how you can exercise any privacy rights.


Network Reliability Engineer

Rust, airflow, Design, ansible, metal, c++, docker, kubernetes, linux, python

Cloudflare is hiring a Remote Network Reliability Engineer

About Us

At Cloudflare, we are on a mission to help build a better Internet. Today the company runs one of the world’s largest networks that powers millions of websites and other Internet properties for customers ranging from individual bloggers to SMBs to Fortune 500 companies. Cloudflare protects and accelerates any Internet application online without adding hardware, installing software, or changing a line of code. Internet properties powered by Cloudflare all have web traffic routed through its intelligent global network, which gets smarter with every request. As a result, they see significant improvement in performance and a decrease in spam and other attacks. Cloudflare was named to Entrepreneur Magazine’s Top Company Cultures list and ranked among the World’s Most Innovative Companies by Fast Company. 

We realize people do not fit into neat boxes. We are looking for curious and empathetic individuals who are committed to developing themselves and learning new skills, and we are ready to help you do that. We cannot complete our mission without building a diverse and inclusive team. We hire the best people based on an evaluation of their potential and support them throughout their time at Cloudflare. Come join us! 

Hiring Locations: Austin, Texas; Atlanta; Denver; New York City; San Francisco; Seattle; or Washington D.C.

About the Role (or What you'll do)

Cloudflare operates a large global network spanning hundreds of cities (data centers). You will join a team of talented network engineers who are building software solutions to improve network resilience and reduce operational toil.
This position will be responsible for the technical operation and engineering of Cloudflare's core data center network, including the planning, installation and management of the hardware and software as well as the day-to-day operations of the network. The core network supports our critical internal needs such as databases, high volume logging, and internal application clusters. This is an opportunity to be part of the team that is building a high-performance network that is accessible to any web property online.

You will build tools to automate operational tasks, streamline deployment processes and provide a platform for other engineering teams to build upon. You will nurture a passion for an “automate everything” approach that makes systems failure-resistant and ready-to-scale. Furthermore, you will be required to play a key role in system design and demonstrate the ability to bring an idea from design all the way to production.

 

Examples of desirable skills, knowledge and experience

  • 5+ years of relevant Network/Site Reliability Engineering experience
  • BA/BS in Computer Science or equivalent experience
  • Solid foundation on configuration management frameworks: Saltstack, Ansible, Chef
  • Experience with NX-OS, JUNOS, EOS, Cumulus, or Sonic Network Operating Systems 
  • Solid Linux systems administration experience
  • Linux networking - iproute2, Traffic Control, Devlink, etc. 
  • Strong software development skills in Go and Python

Bonus Points

  • Deep knowledge of BGP and other routing protocols
  • Workflow Management (AirFlow, Temporal)
  • Open Source Routing Daemons (FRR, Bird, GoBGP)
  • Experience with bare metal switching
  • Experience with network programming in C, C++, or Rust
  • Experience with the Linux kernel and Linux software packaging
  • Strong tooling and automations development experience
  • Time series databases (Prometheus, Grafana, Thanos, Clickhouse) 
  • Other Tools - Kubernetes, Docker, Prometheus, Consul

Compensation

Compensation may be adjusted depending on work location and level. 

  • For Colorado-based hires: Estimated annual salary of $137,000 - $187,000.
  • For New York City-based and California (excluding Bay Area) and Washington hires: Estimated annual salary of $154,000- $208,000.
  • For Bay Area-based hires: Estimated annual salary of $162,000 - $218,000.

Equity

This role is eligible to participate in Cloudflare’s equity plan.

Benefits

Cloudflare offers a complete package of benefits and programs to support you and your family.  Our benefits programs can help you pay health care expenses, support caregiving, build capital for the future and make life a little easier and fun!  The below is a description of our benefits for employees in the United States, and benefits may vary for employees based outside the U.S.

Health & Welfare Benefits

  • Medical/Rx Insurance
  • Dental Insurance
  • Vision Insurance
  • Flexible Spending Accounts
  • Commuter Spending Accounts
  • Fertility & Family Forming Benefits
  • On-demand mental health support and Employee Assistance Program
  • Global Travel Medical Insurance

Financial Benefits

  • Short and Long Term Disability Insurance
  • Life & Accident Insurance
  • 401(k) Retirement Savings Plan
  • Employee Stock Participation Plan

Time Off

  • Flexible paid time off covering vacation and sick leave
  • Leave programs, including parental, pregnancy health, medical, and bereavement leave

What Makes Cloudflare Special?

We’re not just a highly ambitious, large-scale technology company. We’re a highly ambitious, large-scale technology company with a soul. Fundamental to our mission to help build a better Internet is protecting the free and open Internet.

Project Galileo: We equip politically and artistically important organizations and journalists with powerful tools to defend themselves against attacks that would otherwise censor their work, technology already used by Cloudflare’s enterprise customers--at no cost.

Athenian Project: We created Athenian Project to ensure that state and local governments have the highest level of protection and reliability for free, so that their constituents have access to election information and voter registration.

1.1.1.1: We released 1.1.1.1 to help fix the foundation of the Internet by building a faster, more secure and privacy-centric public DNS resolver. This is available publicly for everyone to use - it is the first consumer-focused service Cloudflare has ever released. Here’s the deal: we never, ever store client IP addresses. We will continue to abide by our privacy commitment and ensure that no user data is sold to advertisers or used to target consumers.

Sound like something you’d like to be a part of? We’d love to hear from you!

This position may require access to information protected under U.S. export control laws, including the U.S. Export Administration Regulations. Please note that any offer of employment may be conditioned on your authorization to receive software or technology controlled under these U.S. export laws without sponsorship for an export license.

Cloudflare is proud to be an equal opportunity employer. We are committed to providing equal employment opportunity for all people and place great value in both diversity and inclusiveness. All qualified applicants will be considered for employment without regard to their, or any other person's, perceived or actual race, color, religion, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship, age, physical or mental disability, medical condition, family care status, or any other basis protected by law. We are an AA/Veterans/Disabled Employer.

Cloudflare provides reasonable accommodations to qualified individuals with disabilities. Please tell us if you require a reasonable accommodation to apply for a job. Examples of reasonable accommodations include, but are not limited to, changing the application process, providing documents in an alternate format, using a sign language interpreter, or using specialized equipment. If you require a reasonable accommodation to apply for a job, please contact us via e-mail at hr@cloudflare.com or via mail at 101 Townsend St. San Francisco, CA 94107.


Senior Data Engineer

Gemini, Remote (USA)
remote-first, airflow, sql, Design, css, kubernetes, python, javascript

Gemini is hiring a Remote Senior Data Engineer

About the Company

Gemini is a global crypto and Web3 platform founded by Tyler Winklevoss and Cameron Winklevoss in 2014. Gemini offers a wide range of crypto products and services for individuals and institutions in over 70 countries.

Crypto is about giving you greater choice, independence, and opportunity. We are here to help you on your journey. We build crypto products that are simple, elegant, and secure. Whether you are an individual or an institution, we help you buy, sell, and store your bitcoin and cryptocurrency. 

At Gemini, our mission is to unlock the next era of financial, creative, and personal freedom.

In the United States, we have a flexible hybrid work policy for employees who live within 30 miles of our office headquartered in New York City and our office in Seattle. Employees within the New York and Seattle metropolitan areas are expected to work from the designated office twice a week, unless there is a job-specific requirement to be in the office every workday. Employees outside of these areas are considered part of our remote-first workforce. We believe our hybrid approach for those near our NYC and Seattle offices increases productivity through more in-person collaboration where possible.

The Department: Data

The Role: Senior Data Engineer

As a member of our data engineering team, you'll deliver high quality work while solving challenges that affect all or part of the team's data architecture. You'll keep up with recent advances in the big data space and propose solutions for large-scale applications that align with the team's long-term goals. Your work will help resolve complex problems by identifying root causes, documenting solutions, and building with operational excellence (data auditing, validation, automation, maintainability) in mind. Communicating your insights with leaders across the organization is paramount to success.

Responsibilities:

  • Design, architect and implement best-in-class Data Warehousing and reporting solutions
  • Lead and participate in design discussions and meetings
  • Mentor data engineers and analysts
  • Design, automate, build, and launch scalable, efficient and reliable data pipelines into production using Python
  • Build real-time data and reporting solutions
  • Design, build and enhance dimensional models for Data Warehouse and BI solutions
  • Research new tools and technologies to improve existing processes
  • Develop new systems and tools to enable the teams to consume and understand data more intuitively
  • Partner with engineers, project managers, and analysts to deliver insights to the business
  • Perform root cause analysis and resolve production and data issues
  • Create test plans, test scripts and perform data validation
  • Tune SQL queries, reports and ETL pipelines
  • Build and maintain data dictionary and process documentation
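As a rough illustration of the pipeline, validation, and auditing work the responsibilities above describe (the schema and records here are invented for this sketch, not Gemini's actual stack), a minimal Python example using only the standard library:

```python
import sqlite3

# Hypothetical raw records landed from an upstream feed
raw_rows = [
    {"trade_id": 1, "symbol": "BTC-USD", "qty": 0.5, "price": 60000.0},
    {"trade_id": 2, "symbol": "ETH-USD", "qty": 2.0, "price": 3000.0},
    {"trade_id": 3, "symbol": "BTC-USD", "qty": -1.0, "price": 59500.0},  # bad row
]

def validate(row):
    """Data-auditing step: reject rows that violate basic invariants."""
    return row["qty"] > 0 and row["price"] > 0

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fact_trades "
    "(trade_id INTEGER PRIMARY KEY, symbol TEXT, qty REAL, price REAL)"
)

good = [r for r in raw_rows if validate(r)]
rejected = [r for r in raw_rows if not validate(r)]

conn.executemany(
    "INSERT INTO fact_trades VALUES (:trade_id, :symbol, :qty, :price)", good
)

# Audit: loaded plus rejected must account for every input row
loaded = conn.execute("SELECT COUNT(*) FROM fact_trades").fetchone()[0]
assert loaded + len(rejected) == len(raw_rows)
print(loaded, len(rejected))  # prints: 2 1
```

In a production pipeline the quarantined rows would be written to a dead-letter table and surfaced in monitoring rather than silently dropped, which is the "operations excellence" the role description calls out.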

Minimum Qualifications:

  • 5+ years of experience in data engineering with data warehouse technologies
  • 5+ years of experience in custom ETL design, implementation, and maintenance
  • 5+ years of experience with schema design and dimensional data modeling
  • Experience building real-time data solutions and processes
  • Advanced skills with Python and SQL are a must
  • Experience with one or more MPP databases (Redshift, BigQuery, Snowflake, etc.)
  • Experience with one or more ETL tools (Informatica, Pentaho, SSIS, Alooma, etc.)
  • Strong computer science fundamentals, including data structures and algorithms
  • Strong software engineering skills in any server-side language, preferably Python
  • Experience working collaboratively across different teams and departments
  • Strong technical and business communication

Preferred Qualifications:

  • Experience with Kafka, HDFS, Hive, cloud computing, machine learning, text analysis, NLP, or web development is a plus
  • Experience with Continuous integration and deployment
  • Knowledge and experience of financial markets, banking or exchanges
  • Web development skills with HTML, CSS, or JavaScript
It Pays to Work Here
 
The compensation & benefits package for this role includes:
  • Competitive starting salary
  • A discretionary annual bonus
  • Long-term incentive in the form of a new hire equity grant
  • Comprehensive health plans
  • 401K with company matching
  • Paid Parental Leave
  • Flexible time off

Salary Range: The base salary range for this role is between $136,000 and $170,000 in the State of New York, the State of California, and the State of Washington. This range is not inclusive of our discretionary bonus or equity package. When determining a candidate’s compensation, we consider a number of factors including skillset, experience, job scope, and current market data.

At Gemini, we strive to build diverse teams that reflect the people we want to empower through our products, and we are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Equal Opportunity is the Law, and Gemini is proud to be an equal opportunity workplace. If you have a specific need that requires accommodation, please let a member of the People Team know.

#LI-JS3

Apply for this job

+30d

Sr. Engineer II, Analytics

ML, agile, tableau, airflow, sql, git, c++, python, backend

hims & hers is hiring a Remote Sr. Engineer II, Analytics

Hims & Hers Health, Inc. (better known as Hims & Hers) is the leading health and wellness platform, on a mission to help the world feel great through the power of better health. We are revolutionizing telehealth for providers and their patients alike. Making personalized solutions accessible is of paramount importance to Hims & Hers and we are focused on continued innovation in this space. Hims & Hers offers nonprescription products and access to highly personalized prescription solutions for a variety of conditions related to mental health, sexual health, hair care, skincare, heart health, and more.

Hims & Hers is a public company, traded on the NYSE under the ticker symbol “HIMS”. To learn more about the brand and offerings, you can visit hims.com and forhers.com, or visit our investor site. For information on the company’s outstanding benefits, culture, and its talent-first flexible/remote work approach, see below and visit www.hims.com/careers-professionals.

​​About the Role:

We're looking for a savvy and experienced Senior Analytics Engineer to build seamless data products in collaboration with our data engineering, analytics, engineering, business, and product management teams.

You Will:

  • Take our data products to the next level by developing scalable data models
  • Manage transformations of raw data after load, through both technical processes and business logic
  • Create an inventory of the data sources and documents needed to implement self-service analytics
  • Define data quality standards and partner with the analytics team to define minimum acceptance criteria for data sources
  • Catalog and document the data sources
  • Meet regularly with business partners and analytics teams to understand and solve short- and medium-term data needs
  • Build trust with internal stakeholders to encourage data-driven decision-making
  • Work across all organizations to continually grow the value of our data products by onboarding new data from our backend and third-party systems

You Have:

  • 8+ years of experience with SQL, preferably for data transformation or analytical use cases
  • 4+ years of experience building scalable data models for analytical and BI purposes
  • 3+ years of solid experience with dbt
  • Mastery of data warehouse methodologies and techniques from transactional databases to dimensional data modeling, to wide denormalized data marts
  • Solid experience with BI tools like Tableau and Looker
  • Experience using version control (command-line, Git)
  • Familiarity with a data warehouse such as Google BigQuery, Snowflake, Redshift, or Databricks
  • Domain expertise in one or more of Finance, Product, Marketing, Operations, or Customer Experience
  • Demonstrated experience engaging and influencing senior leaders across functions, including an ability to communicate effectively with both business and technical teams
  • Strong analytical and quantitative skills with the ability to use data and metrics to back up assumptions and recommendations to drive actions
  • Ability to articulate vision, mission, and objectives, and change the narrative appropriate to the audience
  • Experience working with management to define and measure KPIs and other operating metrics
  • Understanding of SDLC and Agile frameworks
  • Project management skills and a demonstrated ability to work autonomously
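To make concrete the dimensional-modeling skill set listed above (the star schema here is invented for illustration, not Hims & Hers' actual model), a toy sketch in Python with the standard library's sqlite3: a dimension and a fact table rolled up into the wide, denormalized shape a BI tool like Tableau or Looker would read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Star schema: one dimension table, one fact table keyed on the dimension
cur.execute(
    "CREATE TABLE dim_product "
    "(product_key INTEGER PRIMARY KEY, name TEXT, category TEXT)"
)
cur.execute(
    "CREATE TABLE fact_orders (order_id INTEGER, product_key INTEGER, amount REAL)"
)

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)", [
    (1, "Shampoo", "hair care"),
    (2, "Serum", "skincare"),
])
cur.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", [
    (100, 1, 25.0),
    (101, 1, 25.0),
    (102, 2, 40.0),
])

# Wide, denormalized mart: revenue and order count by category
rows = cur.execute("""
    SELECT d.category, SUM(f.amount) AS revenue, COUNT(*) AS orders
    FROM fact_orders f
    JOIN dim_product d USING (product_key)
    GROUP BY d.category
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # prints: [('hair care', 50.0, 2), ('skincare', 40.0, 1)]
```

In a dbt project the rollup query would live as a model on top of staging models for the raw tables, so the transformation is versioned, tested, and documented rather than embedded in application code.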

Nice to Have:

  • Experience working in telehealth or e-commerce
  • Previous working experience at startups
  • Knowledge of Python programming
  • Knowledge of Airflow and the modern data stack (Databricks, dbt, Fivetran, Tableau/Looker)
  • Experience with ML training model development

Our Benefits (there are more but here are some highlights):

  • Competitive salary & equity compensation for full-time roles
  • Unlimited PTO, company holidays, and quarterly mental health days
  • Comprehensive health benefits including medical, dental & vision, and parental leave
  • Employee Stock Purchase Program (ESPP)
  • Employee discounts on hims & hers & Apostrophe online products
  • 401k benefits with employer matching contribution
  • Offsite team retreats

#LI-Remote

Outlined below is a reasonable estimate of H&H’s compensation range for this role for US-based candidates. If you're based outside of the US, your recruiter will be able to provide you with an estimated salary range for your location.

The actual amount will take into account a range of factors that are considered in making compensation decisions, including but not limited to skill sets, experience and training, licensure and certifications, and location. H&H also offers a comprehensive Total Rewards package that may include an equity grant.

Consult with your Recruiter during any potential screening to determine a more targeted range based on location and job-related factors.

An estimate of the current salary range is
$150,000 - $180,000 USD

We are focused on building a diverse and inclusive workforce. If you’re excited about this role, but do not meet 100% of the qualifications listed above, we encourage you to apply.

Hims considers all qualified applicants for employment, including applicants with arrest or conviction records, in accordance with the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance, the California Fair Chance Act, and any similar state or local fair chance laws.

It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.

Hims & Hers is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at accommodations@forhims.com and describe the needed accommodation. Your privacy is important to us, and any information you share will only be used for the legitimate purpose of considering your request for accommodation. Hims & Hers gives consideration to all qualified applicants without regard to any protected status, including disability. Please do not send resumes to this email address.

For our California-based applicants – Please see our California Employment Candidate Privacy Policy to learn more about how we collect, use, retain, and disclose Personal Information. 

See more jobs at hims & hers

Apply for this job